00:00:00.000 Started by upstream project "autotest-per-patch" build number 132117 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.207 Using shallow fetch with depth 1 00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.207 > git --version # timeout=10 00:00:00.254 > git --version # 'git version 2.39.2' 00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.765 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.775 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.788 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.788 > git config core.sparsecheckout # timeout=10 00:00:06.799 > git read-tree -mu HEAD # timeout=10 00:00:06.816 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.833 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.834 > git 
rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.939 [Pipeline] Start of Pipeline 00:00:06.950 [Pipeline] library 00:00:06.952 Loading library shm_lib@master 00:00:06.952 Library shm_lib@master is cached. Copying from home. 00:00:06.970 [Pipeline] node 00:00:06.977 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.979 [Pipeline] { 00:00:06.990 [Pipeline] catchError 00:00:06.992 [Pipeline] { 00:00:07.005 [Pipeline] wrap 00:00:07.014 [Pipeline] { 00:00:07.022 [Pipeline] stage 00:00:07.024 [Pipeline] { (Prologue) 00:00:07.299 [Pipeline] sh 00:00:07.585 + logger -p user.info -t JENKINS-CI 00:00:07.608 [Pipeline] echo 00:00:07.611 Node: CYP13 00:00:07.619 [Pipeline] sh 00:00:07.928 [Pipeline] setCustomBuildProperty 00:00:07.939 [Pipeline] echo 00:00:07.940 Cleanup processes 00:00:07.945 [Pipeline] sh 00:00:08.233 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.234 2077997 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.250 [Pipeline] sh 00:00:08.539 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.539 ++ grep -v 'sudo pgrep' 00:00:08.539 ++ awk '{print $1}' 00:00:08.539 + sudo kill -9 00:00:08.539 + true 00:00:08.554 [Pipeline] cleanWs 00:00:08.565 [WS-CLEANUP] Deleting project workspace... 00:00:08.565 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.572 [WS-CLEANUP] done 00:00:08.575 [Pipeline] setCustomBuildProperty 00:00:08.588 [Pipeline] sh 00:00:08.875 + sudo git config --global --replace-all safe.directory '*' 00:00:08.953 [Pipeline] httpRequest 00:00:09.371 [Pipeline] echo 00:00:09.373 Sorcerer 10.211.164.101 is alive 00:00:09.382 [Pipeline] retry 00:00:09.384 [Pipeline] { 00:00:09.398 [Pipeline] httpRequest 00:00:09.403 HttpMethod: GET 00:00:09.403 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.404 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.418 Response Code: HTTP/1.1 200 OK 00:00:09.419 Success: Status code 200 is in the accepted range: 200,404 00:00:09.419 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.779 [Pipeline] } 00:00:11.796 [Pipeline] // retry 00:00:11.804 [Pipeline] sh 00:00:12.093 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.112 [Pipeline] httpRequest 00:00:12.936 [Pipeline] echo 00:00:12.938 Sorcerer 10.211.164.101 is alive 00:00:12.948 [Pipeline] retry 00:00:12.950 [Pipeline] { 00:00:12.965 [Pipeline] httpRequest 00:00:12.970 HttpMethod: GET 00:00:12.970 URL: http://10.211.164.101/packages/spdk_159fecd99dff89f07965ab0b8ab77b2bbf487c65.tar.gz 00:00:12.972 Sending request to url: http://10.211.164.101/packages/spdk_159fecd99dff89f07965ab0b8ab77b2bbf487c65.tar.gz 00:00:12.991 Response Code: HTTP/1.1 200 OK 00:00:12.991 Success: Status code 200 is in the accepted range: 200,404 00:00:12.991 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_159fecd99dff89f07965ab0b8ab77b2bbf487c65.tar.gz 00:01:04.742 [Pipeline] } 00:01:04.761 [Pipeline] // retry 00:01:04.771 [Pipeline] sh 00:01:05.067 + tar --no-same-owner -xf spdk_159fecd99dff89f07965ab0b8ab77b2bbf487c65.tar.gz 00:01:08.380 [Pipeline] sh 00:01:08.670 + git -C spdk log 
--oneline -n5 00:01:08.670 159fecd99 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:01:08.670 6a3a0b5fb bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:01:08.670 32c6c4b3a bdev: Rename _bdev_memory_domain_io_get_buf() by bdev_io_get_bounce_buf() 00:01:08.670 1e85affe1 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:01:08.670 55320b5d9 bdev: Factor out checking bounce buffer necessity into helper function 00:01:08.681 [Pipeline] } 00:01:08.695 [Pipeline] // stage 00:01:08.705 [Pipeline] stage 00:01:08.707 [Pipeline] { (Prepare) 00:01:08.725 [Pipeline] writeFile 00:01:08.740 [Pipeline] sh 00:01:09.029 + logger -p user.info -t JENKINS-CI 00:01:09.043 [Pipeline] sh 00:01:09.333 + logger -p user.info -t JENKINS-CI 00:01:09.345 [Pipeline] sh 00:01:09.634 + cat autorun-spdk.conf 00:01:09.634 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.634 SPDK_TEST_NVMF=1 00:01:09.634 SPDK_TEST_NVME_CLI=1 00:01:09.634 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.634 SPDK_TEST_NVMF_NICS=e810 00:01:09.634 SPDK_TEST_VFIOUSER=1 00:01:09.634 SPDK_RUN_UBSAN=1 00:01:09.634 NET_TYPE=phy 00:01:09.642 RUN_NIGHTLY=0 00:01:09.647 [Pipeline] readFile 00:01:09.670 [Pipeline] withEnv 00:01:09.673 [Pipeline] { 00:01:09.684 [Pipeline] sh 00:01:09.973 + set -ex 00:01:09.973 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.973 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.973 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.973 ++ SPDK_TEST_NVMF=1 00:01:09.973 ++ SPDK_TEST_NVME_CLI=1 00:01:09.973 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.973 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.973 ++ SPDK_TEST_VFIOUSER=1 00:01:09.973 ++ SPDK_RUN_UBSAN=1 00:01:09.973 ++ NET_TYPE=phy 00:01:09.973 ++ RUN_NIGHTLY=0 00:01:09.973 + case $SPDK_TEST_NVMF_NICS in 00:01:09.973 + DRIVERS=ice 00:01:09.973 + [[ tcp == \r\d\m\a ]] 00:01:09.973 + [[ -n ice ]] 00:01:09.973 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:09.973 
rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.973 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:09.973 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.973 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.973 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.973 + true 00:01:09.973 + for D in $DRIVERS 00:01:09.973 + sudo modprobe ice 00:01:09.973 + exit 0 00:01:09.983 [Pipeline] } 00:01:09.998 [Pipeline] // withEnv 00:01:10.003 [Pipeline] } 00:01:10.017 [Pipeline] // stage 00:01:10.030 [Pipeline] catchError 00:01:10.032 [Pipeline] { 00:01:10.052 [Pipeline] timeout 00:01:10.052 Timeout set to expire in 1 hr 0 min 00:01:10.057 [Pipeline] { 00:01:10.078 [Pipeline] stage 00:01:10.081 [Pipeline] { (Tests) 00:01:10.103 [Pipeline] sh 00:01:10.387 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.387 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.387 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.387 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.387 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.387 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.387 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.387 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.387 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.387 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.387 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:10.387 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.387 + source /etc/os-release 00:01:10.387 ++ NAME='Fedora Linux' 00:01:10.387 ++ VERSION='39 (Cloud Edition)' 00:01:10.387 ++ ID=fedora 00:01:10.387 ++ VERSION_ID=39 00:01:10.387 ++ VERSION_CODENAME= 00:01:10.387 ++ PLATFORM_ID=platform:f39 00:01:10.387 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:10.387 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.387 ++ LOGO=fedora-logo-icon 00:01:10.387 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:10.387 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.387 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:10.387 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.387 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.387 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.387 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:10.387 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.387 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:10.387 ++ SUPPORT_END=2024-11-12 00:01:10.387 ++ VARIANT='Cloud Edition' 00:01:10.387 ++ VARIANT_ID=cloud 00:01:10.387 + uname -a 00:01:10.387 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:10.387 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.688 Hugepages 00:01:13.688 node hugesize free / total 00:01:13.688 node0 1048576kB 0 / 0 00:01:13.688 node0 2048kB 0 / 0 00:01:13.688 node1 1048576kB 0 / 0 00:01:13.688 node1 2048kB 0 / 0 00:01:13.688 00:01:13.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.688 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:01:13.688 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:13.688 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:13.688 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:13.688 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:13.688 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:13.688 + rm -f /tmp/spdk-ld-path 00:01:13.688 + source autorun-spdk.conf 00:01:13.688 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.688 ++ SPDK_TEST_NVMF=1 00:01:13.688 ++ SPDK_TEST_NVME_CLI=1 00:01:13.688 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.688 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.688 ++ SPDK_TEST_VFIOUSER=1 00:01:13.688 ++ SPDK_RUN_UBSAN=1 00:01:13.688 ++ NET_TYPE=phy 00:01:13.688 ++ RUN_NIGHTLY=0 00:01:13.688 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.688 + [[ -n '' ]] 00:01:13.688 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.688 + for M in /var/spdk/build-*-manifest.txt 00:01:13.688 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:13.688 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.688 + for M in /var/spdk/build-*-manifest.txt 00:01:13.688 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.688 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.689 + for M in /var/spdk/build-*-manifest.txt 00:01:13.689 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:13.689 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.689 ++ uname 00:01:13.689 + [[ Linux == \L\i\n\u\x ]] 00:01:13.689 + sudo dmesg -T 00:01:13.689 + sudo dmesg --clear 00:01:13.689 + dmesg_pid=2079548 00:01:13.689 + [[ Fedora Linux == FreeBSD ]] 00:01:13.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.689 + sudo dmesg -Tw 00:01:13.689 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.689 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.689 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.689 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:13.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.689 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.950 13:42:59 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:13.950 13:42:59 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:13.950 13:42:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:13.950 13:42:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:13.950 13:42:59 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.950 13:43:00 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:13.950 13:43:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.950 13:43:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:13.950 13:43:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.950 13:43:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.950 13:43:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.950 13:43:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.950 13:43:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.950 13:43:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.950 13:43:00 -- paths/export.sh@5 -- $ export PATH 00:01:13.950 13:43:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.950 13:43:00 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.950 13:43:00 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:13.950 13:43:00 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730896980.XXXXXX 00:01:13.950 13:43:00 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730896980.W1VwT8 00:01:13.950 13:43:00 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:13.950 13:43:00 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:13.950 13:43:00 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.950 13:43:00 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.950 13:43:00 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.950 13:43:00 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:13.950 13:43:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:13.950 13:43:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.950 13:43:00 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.950 13:43:00 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:13.950 13:43:00 -- pm/common@17 -- $ local monitor 00:01:13.950 13:43:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.950 13:43:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.950 13:43:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.950 13:43:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.950 13:43:00 -- pm/common@21 -- $ date +%s 00:01:13.950 13:43:00 -- pm/common@25 -- $ sleep 1 00:01:13.950 13:43:00 -- pm/common@21 -- $ date +%s 00:01:13.950 13:43:00 -- pm/common@21 -- $ date +%s 00:01:13.950 13:43:00 -- pm/common@21 -- $ date +%s 00:01:13.950 13:43:00 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730896980 00:01:13.950 13:43:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730896980 00:01:13.950 13:43:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730896980 00:01:13.950 13:43:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730896980 00:01:13.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730896980_collect-cpu-load.pm.log 00:01:13.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730896980_collect-vmstat.pm.log 00:01:13.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730896980_collect-cpu-temp.pm.log 00:01:13.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730896980_collect-bmc-pm.bmc.pm.log 00:01:14.893 13:43:01 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:14.893 13:43:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.893 13:43:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.893 13:43:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.893 13:43:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.893 Wed Nov 6 12:43:01 PM UTC 2024 00:01:14.893 13:43:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:14.893 v25.01-pre-185-g159fecd99 00:01:14.893 13:43:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.893 13:43:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.893 13:43:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.893 13:43:01 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:14.893 13:43:01 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:14.893 13:43:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.153 ************************************ 00:01:15.153 START TEST ubsan 00:01:15.153 ************************************ 00:01:15.153 13:43:01 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:15.153 using ubsan 00:01:15.153 00:01:15.153 real 0m0.001s 00:01:15.153 user 0m0.001s 00:01:15.153 sys 0m0.000s 00:01:15.153 13:43:01 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:15.153 13:43:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.154 ************************************ 00:01:15.154 END TEST ubsan 00:01:15.154 ************************************ 00:01:15.154 13:43:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.154 13:43:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.154 13:43:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.154 13:43:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:15.154 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.154 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.726 Using 'verbs' RDMA provider 00:01:31.579 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.839 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:44.410 Creating mk/config.mk...done. 00:01:44.410 Creating mk/cc.flags.mk...done. 00:01:44.410 Type 'make' to build. 00:01:44.410 13:43:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:44.410 13:43:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:44.410 13:43:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:44.410 13:43:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.410 ************************************ 00:01:44.410 START TEST make 00:01:44.410 ************************************ 00:01:44.410 13:43:30 make -- common/autotest_common.sh@1127 -- $ make -j144 00:01:44.981 make[1]: Nothing to be done for 'all'. 
00:01:46.365 The Meson build system 00:01:46.365 Version: 1.5.0 00:01:46.365 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:46.365 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.365 Build type: native build 00:01:46.365 Project name: libvfio-user 00:01:46.365 Project version: 0.0.1 00:01:46.365 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:46.365 C linker for the host machine: cc ld.bfd 2.40-14 00:01:46.365 Host machine cpu family: x86_64 00:01:46.365 Host machine cpu: x86_64 00:01:46.365 Run-time dependency threads found: YES 00:01:46.365 Library dl found: YES 00:01:46.365 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:46.365 Run-time dependency json-c found: YES 0.17 00:01:46.365 Run-time dependency cmocka found: YES 1.1.7 00:01:46.365 Program pytest-3 found: NO 00:01:46.365 Program flake8 found: NO 00:01:46.365 Program misspell-fixer found: NO 00:01:46.365 Program restructuredtext-lint found: NO 00:01:46.365 Program valgrind found: YES (/usr/bin/valgrind) 00:01:46.365 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.365 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.365 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.365 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:46.365 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:46.365 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:46.365 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:46.365 Build targets in project: 8 00:01:46.365 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:46.365 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:46.365 00:01:46.365 libvfio-user 0.0.1 00:01:46.365 00:01:46.365 User defined options 00:01:46.365 buildtype : debug 00:01:46.365 default_library: shared 00:01:46.365 libdir : /usr/local/lib 00:01:46.365 00:01:46.365 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.625 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.884 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:46.884 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:46.884 [3/37] Compiling C object samples/null.p/null.c.o 00:01:46.884 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:46.884 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:46.884 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:46.884 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:46.884 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:46.884 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:46.884 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:46.884 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:46.884 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:46.884 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:46.884 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:46.884 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:46.884 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:46.884 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:46.884 [18/37] Compiling C object 
samples/client.p/.._lib_tran_sock.c.o 00:01:46.884 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:46.884 [20/37] Compiling C object samples/server.p/server.c.o 00:01:46.884 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:46.884 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:46.884 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:46.884 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:46.884 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:46.884 [26/37] Compiling C object samples/client.p/client.c.o 00:01:46.884 [27/37] Linking target samples/client 00:01:46.884 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:47.143 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:47.143 [30/37] Linking target test/unit_tests 00:01:47.143 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:47.143 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:47.143 [33/37] Linking target samples/lspci 00:01:47.143 [34/37] Linking target samples/server 00:01:47.143 [35/37] Linking target samples/null 00:01:47.143 [36/37] Linking target samples/gpio-pci-idio-16 00:01:47.143 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:47.404 INFO: autodetecting backend as ninja 00:01:47.404 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.404 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.664 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.664 ninja: no work to do. 
00:01:54.253 The Meson build system 00:01:54.253 Version: 1.5.0 00:01:54.253 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:54.253 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:54.253 Build type: native build 00:01:54.253 Program cat found: YES (/usr/bin/cat) 00:01:54.253 Project name: DPDK 00:01:54.253 Project version: 24.03.0 00:01:54.253 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.253 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.253 Host machine cpu family: x86_64 00:01:54.253 Host machine cpu: x86_64 00:01:54.253 Message: ## Building in Developer Mode ## 00:01:54.253 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.253 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.253 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.253 Program python3 found: YES (/usr/bin/python3) 00:01:54.253 Program cat found: YES (/usr/bin/cat) 00:01:54.253 Compiler for C supports arguments -march=native: YES 00:01:54.253 Checking for size of "void *" : 8 00:01:54.253 Checking for size of "void *" : 8 (cached) 00:01:54.253 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.253 Library m found: YES 00:01:54.253 Library numa found: YES 00:01:54.253 Has header "numaif.h" : YES 00:01:54.253 Library fdt found: NO 00:01:54.253 Library execinfo found: NO 00:01:54.253 Has header "execinfo.h" : YES 00:01:54.253 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.253 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.253 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.253 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.253 Run-time dependency openssl found: YES 3.1.1 00:01:54.253 Run-time 
dependency libpcap found: YES 1.10.4 00:01:54.253 Has header "pcap.h" with dependency libpcap: YES 00:01:54.253 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.253 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.253 Compiler for C supports arguments -Wformat: YES 00:01:54.253 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.253 Compiler for C supports arguments -Wformat-security: NO 00:01:54.253 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.253 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.253 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.253 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.253 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.253 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.253 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.253 Compiler for C supports arguments -Wundef: YES 00:01:54.253 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.253 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.253 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.253 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.253 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.253 Program objdump found: YES (/usr/bin/objdump) 00:01:54.253 Compiler for C supports arguments -mavx512f: YES 00:01:54.253 Checking if "AVX512 checking" compiles: YES 00:01:54.253 Fetching value of define "__SSE4_2__" : 1 00:01:54.253 Fetching value of define "__AES__" : 1 00:01:54.253 Fetching value of define "__AVX__" : 1 00:01:54.253 Fetching value of define "__AVX2__" : 1 00:01:54.253 Fetching value of define "__AVX512BW__" : 1 00:01:54.253 Fetching value of define "__AVX512CD__" : 1 00:01:54.253 Fetching value of define "__AVX512DQ__" : 1 00:01:54.253 Fetching value of define "__AVX512F__" : 1 
00:01:54.253 Fetching value of define "__AVX512VL__" : 1 00:01:54.253 Fetching value of define "__PCLMUL__" : 1 00:01:54.253 Fetching value of define "__RDRND__" : 1 00:01:54.253 Fetching value of define "__RDSEED__" : 1 00:01:54.253 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:54.253 Fetching value of define "__znver1__" : (undefined) 00:01:54.253 Fetching value of define "__znver2__" : (undefined) 00:01:54.254 Fetching value of define "__znver3__" : (undefined) 00:01:54.254 Fetching value of define "__znver4__" : (undefined) 00:01:54.254 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.254 Message: lib/log: Defining dependency "log" 00:01:54.254 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.254 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.254 Checking for function "getentropy" : NO 00:01:54.254 Message: lib/eal: Defining dependency "eal" 00:01:54.254 Message: lib/ring: Defining dependency "ring" 00:01:54.254 Message: lib/rcu: Defining dependency "rcu" 00:01:54.254 Message: lib/mempool: Defining dependency "mempool" 00:01:54.254 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.254 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.254 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.254 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.254 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.254 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.254 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:54.254 Compiler for C supports arguments -mpclmul: YES 00:01:54.254 Compiler for C supports arguments -maes: YES 00:01:54.254 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.254 Compiler for C supports arguments -mavx512bw: YES 00:01:54.254 Compiler for C supports arguments -mavx512dq: YES 00:01:54.254 Compiler for C supports arguments -mavx512vl: YES 00:01:54.254 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:54.254 Compiler for C supports arguments -mavx2: YES 00:01:54.254 Compiler for C supports arguments -mavx: YES 00:01:54.254 Message: lib/net: Defining dependency "net" 00:01:54.254 Message: lib/meter: Defining dependency "meter" 00:01:54.254 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.254 Message: lib/pci: Defining dependency "pci" 00:01:54.254 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.254 Message: lib/hash: Defining dependency "hash" 00:01:54.254 Message: lib/timer: Defining dependency "timer" 00:01:54.254 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.254 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.254 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.254 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.254 Message: lib/power: Defining dependency "power" 00:01:54.254 Message: lib/reorder: Defining dependency "reorder" 00:01:54.254 Message: lib/security: Defining dependency "security" 00:01:54.254 Has header "linux/userfaultfd.h" : YES 00:01:54.254 Has header "linux/vduse.h" : YES 00:01:54.254 Message: lib/vhost: Defining dependency "vhost" 00:01:54.254 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.254 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.254 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.254 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.254 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.254 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.254 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.254 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.254 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.254 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.254 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.254 Configuring doxy-api-html.conf using configuration 00:01:54.254 Configuring doxy-api-man.conf using configuration 00:01:54.254 Program mandb found: YES (/usr/bin/mandb) 00:01:54.254 Program sphinx-build found: NO 00:01:54.254 Configuring rte_build_config.h using configuration 00:01:54.254 Message: 00:01:54.254 ================= 00:01:54.254 Applications Enabled 00:01:54.254 ================= 00:01:54.254 00:01:54.254 apps: 00:01:54.254 00:01:54.254 00:01:54.254 Message: 00:01:54.254 ================= 00:01:54.254 Libraries Enabled 00:01:54.254 ================= 00:01:54.254 00:01:54.254 libs: 00:01:54.254 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.254 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.254 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.254 00:01:54.254 Message: 00:01:54.254 =============== 00:01:54.254 Drivers Enabled 00:01:54.254 =============== 00:01:54.254 00:01:54.254 common: 00:01:54.254 00:01:54.254 bus: 00:01:54.254 pci, vdev, 00:01:54.254 mempool: 00:01:54.254 ring, 00:01:54.254 dma: 00:01:54.254 00:01:54.254 net: 00:01:54.254 00:01:54.254 crypto: 00:01:54.254 00:01:54.254 compress: 00:01:54.254 00:01:54.254 vdpa: 00:01:54.254 00:01:54.254 00:01:54.254 Message: 00:01:54.254 ================= 00:01:54.254 Content Skipped 00:01:54.254 ================= 00:01:54.254 00:01:54.254 apps: 00:01:54.254 dumpcap: explicitly disabled via build config 00:01:54.254 graph: explicitly disabled via build config 00:01:54.254 pdump: explicitly disabled via build config 00:01:54.254 proc-info: explicitly disabled via build config 00:01:54.254 test-acl: explicitly disabled via build config 00:01:54.254 test-bbdev: explicitly disabled via build config 00:01:54.254 test-cmdline: explicitly disabled via build config 00:01:54.254 test-compress-perf: explicitly disabled via build config 00:01:54.254 test-crypto-perf: explicitly disabled via build 
config 00:01:54.254 test-dma-perf: explicitly disabled via build config 00:01:54.254 test-eventdev: explicitly disabled via build config 00:01:54.254 test-fib: explicitly disabled via build config 00:01:54.254 test-flow-perf: explicitly disabled via build config 00:01:54.254 test-gpudev: explicitly disabled via build config 00:01:54.254 test-mldev: explicitly disabled via build config 00:01:54.254 test-pipeline: explicitly disabled via build config 00:01:54.254 test-pmd: explicitly disabled via build config 00:01:54.254 test-regex: explicitly disabled via build config 00:01:54.254 test-sad: explicitly disabled via build config 00:01:54.254 test-security-perf: explicitly disabled via build config 00:01:54.254 00:01:54.254 libs: 00:01:54.254 argparse: explicitly disabled via build config 00:01:54.254 metrics: explicitly disabled via build config 00:01:54.254 acl: explicitly disabled via build config 00:01:54.254 bbdev: explicitly disabled via build config 00:01:54.254 bitratestats: explicitly disabled via build config 00:01:54.254 bpf: explicitly disabled via build config 00:01:54.254 cfgfile: explicitly disabled via build config 00:01:54.254 distributor: explicitly disabled via build config 00:01:54.254 efd: explicitly disabled via build config 00:01:54.254 eventdev: explicitly disabled via build config 00:01:54.254 dispatcher: explicitly disabled via build config 00:01:54.254 gpudev: explicitly disabled via build config 00:01:54.254 gro: explicitly disabled via build config 00:01:54.254 gso: explicitly disabled via build config 00:01:54.254 ip_frag: explicitly disabled via build config 00:01:54.254 jobstats: explicitly disabled via build config 00:01:54.254 latencystats: explicitly disabled via build config 00:01:54.254 lpm: explicitly disabled via build config 00:01:54.254 member: explicitly disabled via build config 00:01:54.254 pcapng: explicitly disabled via build config 00:01:54.254 rawdev: explicitly disabled via build config 00:01:54.254 regexdev: explicitly 
disabled via build config 00:01:54.254 mldev: explicitly disabled via build config 00:01:54.254 rib: explicitly disabled via build config 00:01:54.254 sched: explicitly disabled via build config 00:01:54.254 stack: explicitly disabled via build config 00:01:54.254 ipsec: explicitly disabled via build config 00:01:54.254 pdcp: explicitly disabled via build config 00:01:54.254 fib: explicitly disabled via build config 00:01:54.254 port: explicitly disabled via build config 00:01:54.254 pdump: explicitly disabled via build config 00:01:54.254 table: explicitly disabled via build config 00:01:54.254 pipeline: explicitly disabled via build config 00:01:54.254 graph: explicitly disabled via build config 00:01:54.254 node: explicitly disabled via build config 00:01:54.254 00:01:54.254 drivers: 00:01:54.254 common/cpt: not in enabled drivers build config 00:01:54.254 common/dpaax: not in enabled drivers build config 00:01:54.254 common/iavf: not in enabled drivers build config 00:01:54.255 common/idpf: not in enabled drivers build config 00:01:54.255 common/ionic: not in enabled drivers build config 00:01:54.255 common/mvep: not in enabled drivers build config 00:01:54.255 common/octeontx: not in enabled drivers build config 00:01:54.255 bus/auxiliary: not in enabled drivers build config 00:01:54.255 bus/cdx: not in enabled drivers build config 00:01:54.255 bus/dpaa: not in enabled drivers build config 00:01:54.255 bus/fslmc: not in enabled drivers build config 00:01:54.255 bus/ifpga: not in enabled drivers build config 00:01:54.255 bus/platform: not in enabled drivers build config 00:01:54.255 bus/uacce: not in enabled drivers build config 00:01:54.255 bus/vmbus: not in enabled drivers build config 00:01:54.255 common/cnxk: not in enabled drivers build config 00:01:54.255 common/mlx5: not in enabled drivers build config 00:01:54.255 common/nfp: not in enabled drivers build config 00:01:54.255 common/nitrox: not in enabled drivers build config 00:01:54.255 common/qat: not 
in enabled drivers build config 00:01:54.255 common/sfc_efx: not in enabled drivers build config 00:01:54.255 mempool/bucket: not in enabled drivers build config 00:01:54.255 mempool/cnxk: not in enabled drivers build config 00:01:54.255 mempool/dpaa: not in enabled drivers build config 00:01:54.255 mempool/dpaa2: not in enabled drivers build config 00:01:54.255 mempool/octeontx: not in enabled drivers build config 00:01:54.255 mempool/stack: not in enabled drivers build config 00:01:54.255 dma/cnxk: not in enabled drivers build config 00:01:54.255 dma/dpaa: not in enabled drivers build config 00:01:54.255 dma/dpaa2: not in enabled drivers build config 00:01:54.255 dma/hisilicon: not in enabled drivers build config 00:01:54.255 dma/idxd: not in enabled drivers build config 00:01:54.255 dma/ioat: not in enabled drivers build config 00:01:54.255 dma/skeleton: not in enabled drivers build config 00:01:54.255 net/af_packet: not in enabled drivers build config 00:01:54.255 net/af_xdp: not in enabled drivers build config 00:01:54.255 net/ark: not in enabled drivers build config 00:01:54.255 net/atlantic: not in enabled drivers build config 00:01:54.255 net/avp: not in enabled drivers build config 00:01:54.255 net/axgbe: not in enabled drivers build config 00:01:54.255 net/bnx2x: not in enabled drivers build config 00:01:54.255 net/bnxt: not in enabled drivers build config 00:01:54.255 net/bonding: not in enabled drivers build config 00:01:54.255 net/cnxk: not in enabled drivers build config 00:01:54.255 net/cpfl: not in enabled drivers build config 00:01:54.255 net/cxgbe: not in enabled drivers build config 00:01:54.255 net/dpaa: not in enabled drivers build config 00:01:54.255 net/dpaa2: not in enabled drivers build config 00:01:54.255 net/e1000: not in enabled drivers build config 00:01:54.255 net/ena: not in enabled drivers build config 00:01:54.255 net/enetc: not in enabled drivers build config 00:01:54.255 net/enetfec: not in enabled drivers build config 
00:01:54.255 net/enic: not in enabled drivers build config 00:01:54.255 net/failsafe: not in enabled drivers build config 00:01:54.255 net/fm10k: not in enabled drivers build config 00:01:54.255 net/gve: not in enabled drivers build config 00:01:54.255 net/hinic: not in enabled drivers build config 00:01:54.255 net/hns3: not in enabled drivers build config 00:01:54.255 net/i40e: not in enabled drivers build config 00:01:54.255 net/iavf: not in enabled drivers build config 00:01:54.255 net/ice: not in enabled drivers build config 00:01:54.255 net/idpf: not in enabled drivers build config 00:01:54.255 net/igc: not in enabled drivers build config 00:01:54.255 net/ionic: not in enabled drivers build config 00:01:54.255 net/ipn3ke: not in enabled drivers build config 00:01:54.255 net/ixgbe: not in enabled drivers build config 00:01:54.255 net/mana: not in enabled drivers build config 00:01:54.255 net/memif: not in enabled drivers build config 00:01:54.255 net/mlx4: not in enabled drivers build config 00:01:54.255 net/mlx5: not in enabled drivers build config 00:01:54.255 net/mvneta: not in enabled drivers build config 00:01:54.255 net/mvpp2: not in enabled drivers build config 00:01:54.255 net/netvsc: not in enabled drivers build config 00:01:54.255 net/nfb: not in enabled drivers build config 00:01:54.255 net/nfp: not in enabled drivers build config 00:01:54.255 net/ngbe: not in enabled drivers build config 00:01:54.255 net/null: not in enabled drivers build config 00:01:54.255 net/octeontx: not in enabled drivers build config 00:01:54.255 net/octeon_ep: not in enabled drivers build config 00:01:54.255 net/pcap: not in enabled drivers build config 00:01:54.255 net/pfe: not in enabled drivers build config 00:01:54.255 net/qede: not in enabled drivers build config 00:01:54.255 net/ring: not in enabled drivers build config 00:01:54.255 net/sfc: not in enabled drivers build config 00:01:54.255 net/softnic: not in enabled drivers build config 00:01:54.255 net/tap: not in 
enabled drivers build config 00:01:54.255 net/thunderx: not in enabled drivers build config 00:01:54.255 net/txgbe: not in enabled drivers build config 00:01:54.255 net/vdev_netvsc: not in enabled drivers build config 00:01:54.255 net/vhost: not in enabled drivers build config 00:01:54.255 net/virtio: not in enabled drivers build config 00:01:54.255 net/vmxnet3: not in enabled drivers build config 00:01:54.255 raw/*: missing internal dependency, "rawdev" 00:01:54.255 crypto/armv8: not in enabled drivers build config 00:01:54.255 crypto/bcmfs: not in enabled drivers build config 00:01:54.255 crypto/caam_jr: not in enabled drivers build config 00:01:54.255 crypto/ccp: not in enabled drivers build config 00:01:54.255 crypto/cnxk: not in enabled drivers build config 00:01:54.255 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.255 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.255 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.255 crypto/mlx5: not in enabled drivers build config 00:01:54.255 crypto/mvsam: not in enabled drivers build config 00:01:54.255 crypto/nitrox: not in enabled drivers build config 00:01:54.255 crypto/null: not in enabled drivers build config 00:01:54.255 crypto/octeontx: not in enabled drivers build config 00:01:54.255 crypto/openssl: not in enabled drivers build config 00:01:54.255 crypto/scheduler: not in enabled drivers build config 00:01:54.255 crypto/uadk: not in enabled drivers build config 00:01:54.255 crypto/virtio: not in enabled drivers build config 00:01:54.255 compress/isal: not in enabled drivers build config 00:01:54.255 compress/mlx5: not in enabled drivers build config 00:01:54.255 compress/nitrox: not in enabled drivers build config 00:01:54.255 compress/octeontx: not in enabled drivers build config 00:01:54.255 compress/zlib: not in enabled drivers build config 00:01:54.255 regex/*: missing internal dependency, "regexdev" 00:01:54.255 ml/*: missing internal dependency, "mldev" 
00:01:54.255 vdpa/ifc: not in enabled drivers build config 00:01:54.255 vdpa/mlx5: not in enabled drivers build config 00:01:54.255 vdpa/nfp: not in enabled drivers build config 00:01:54.255 vdpa/sfc: not in enabled drivers build config 00:01:54.255 event/*: missing internal dependency, "eventdev" 00:01:54.255 baseband/*: missing internal dependency, "bbdev" 00:01:54.255 gpu/*: missing internal dependency, "gpudev" 00:01:54.255 00:01:54.255 00:01:54.255 Build targets in project: 84 00:01:54.255 00:01:54.255 DPDK 24.03.0 00:01:54.255 00:01:54.255 User defined options 00:01:54.255 buildtype : debug 00:01:54.255 default_library : shared 00:01:54.255 libdir : lib 00:01:54.255 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.255 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.255 c_link_args : 00:01:54.255 cpu_instruction_set: native 00:01:54.255 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:54.255 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:54.256 enable_docs : false 00:01:54.256 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:54.256 enable_kmods : false 00:01:54.256 max_lcores : 128 00:01:54.256 tests : false 00:01:54.256 00:01:54.256 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.256 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.256 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.256 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.256 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.256 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.256 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.256 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.256 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.256 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.256 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.256 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.256 [11/267] Linking static target lib/librte_kvargs.a 00:01:54.256 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.256 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.256 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.256 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.256 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.256 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.256 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.256 [19/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.256 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.256 [21/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.256 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.256 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.256 [24/267] Linking static target lib/librte_log.a 00:01:54.256 [25/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.256 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.256 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.256 [28/267] Linking static target lib/librte_pci.a 00:01:54.256 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.256 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.515 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.515 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.515 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.515 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.515 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:54.515 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.515 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:54.515 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.515 [39/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.775 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.775 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.775 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.775 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.775 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.775 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.775 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.775 [47/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:54.775 [48/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.775 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.775 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.775 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.775 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.775 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.775 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:54.775 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:54.775 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.775 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.775 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.775 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:54.775 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.775 [61/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.775 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.775 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.776 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.776 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.776 [66/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.776 [67/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.776 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.776 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:54.776 [70/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.776 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.776 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.776 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.776 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:54.776 [75/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.776 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:54.776 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.776 [78/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:54.776 [79/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:54.776 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.776 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.776 [82/267] Linking static target lib/librte_telemetry.a 00:01:54.776 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:54.776 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:54.776 [85/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:54.776 [86/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:54.776 [87/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.776 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.776 [89/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.776 [90/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.776 [91/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.776 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.776 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:54.776 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:54.776 [95/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:54.776 [96/267] Linking static target lib/librte_ring.a 00:01:54.776 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:54.776 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.776 [99/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.776 [100/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:54.776 [101/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.776 [102/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.776 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.776 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:54.776 [105/267] Linking static target lib/librte_meter.a 00:01:54.776 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:54.776 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.776 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:54.776 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:54.776 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.776 [111/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:54.776 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.776 [113/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.776 [114/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.776 [115/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:54.776 [116/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.776 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:54.776 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:54.776 [119/267] Linking static target lib/librte_timer.a 00:01:54.776 [120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:54.776 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:54.776 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:54.776 [123/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:54.776 [124/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.776 [125/267] Linking static target lib/librte_cmdline.a 00:01:54.776 [126/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.776 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.776 [128/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:54.776 [129/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.776 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:54.776 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.776 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.776 [133/267] Linking static target lib/librte_rcu.a 00:01:54.776 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:54.776 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.776 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.776 [137/267] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.776 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.036 [139/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.036 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.036 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.036 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.036 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.036 [144/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.036 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.036 [146/267] Linking static target lib/librte_dmadev.a 00:01:55.036 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.036 [148/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.036 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.036 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.036 [151/267] Linking static target lib/librte_mempool.a 00:01:55.036 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.036 [153/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.036 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.036 [155/267] Linking static target lib/librte_reorder.a 00:01:55.036 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.036 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.036 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.036 [159/267] Linking static target lib/librte_compressdev.a 00:01:55.036 [160/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.036 [161/267] Linking target lib/librte_log.so.24.1 00:01:55.036 [162/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.036 [163/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.036 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.036 [165/267] Linking static target lib/librte_power.a 00:01:55.037 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.037 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.037 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.037 [169/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.037 [170/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.037 [171/267] Linking static target lib/librte_net.a 00:01:55.037 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.037 [173/267] Linking static target lib/librte_mbuf.a 00:01:55.037 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.037 [175/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.037 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.037 [177/267] Linking static target lib/librte_security.a 00:01:55.037 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.037 [179/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.037 [180/267] Linking static target lib/librte_eal.a 00:01:55.037 [181/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.037 [182/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.037 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.037 [184/267] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.037 [185/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.037 [186/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.037 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.037 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.037 [189/267] Linking static target lib/librte_hash.a 00:01:55.037 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.037 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.037 [192/267] Linking target lib/librte_kvargs.so.24.1 00:01:55.037 [193/267] Linking static target drivers/librte_bus_vdev.a 00:01:55.037 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.296 [195/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.296 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.296 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.296 [198/267] Linking static target drivers/librte_bus_pci.a 00:01:55.296 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.296 [200/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.296 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.296 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.296 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.296 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:55.296 [205/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.296 [206/267] Linking static target lib/librte_cryptodev.a 00:01:55.296 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.296 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.296 [209/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.296 [210/267] Linking target lib/librte_telemetry.so.24.1 00:01:55.557 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.557 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.557 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.557 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.557 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.817 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.817 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.817 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:55.817 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.817 [220/267] Linking static target lib/librte_ethdev.a 00:01:56.077 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.077 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.077 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.077 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.337 
[225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.337 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.907 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.907 [228/267] Linking static target lib/librte_vhost.a 00:01:57.477 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.387 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.968 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.538 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.538 [233/267] Linking target lib/librte_eal.so.24.1 00:02:06.798 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.798 [235/267] Linking target lib/librte_ring.so.24.1 00:02:06.798 [236/267] Linking target lib/librte_pci.so.24.1 00:02:06.798 [237/267] Linking target lib/librte_timer.so.24.1 00:02:06.798 [238/267] Linking target lib/librte_meter.so.24.1 00:02:06.798 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:06.798 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.058 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.058 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.058 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.058 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.058 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.058 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.058 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:07.058 
[248/267] Linking target lib/librte_mempool.so.24.1 00:02:07.058 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.058 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.319 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:07.319 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.319 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.319 [254/267] Linking target lib/librte_net.so.24.1 00:02:07.319 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:07.319 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:07.319 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:07.579 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.579 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.579 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:07.579 [261/267] Linking target lib/librte_hash.so.24.1 00:02:07.579 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:07.579 [263/267] Linking target lib/librte_security.so.24.1 00:02:07.579 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.579 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.839 [266/267] Linking target lib/librte_power.so.24.1 00:02:07.839 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:07.839 INFO: autodetecting backend as ninja 00:02:07.839 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:12.040 CC lib/log/log.o 00:02:12.040 CC lib/log/log_flags.o 00:02:12.040 CC lib/log/log_deprecated.o 00:02:12.040 CC lib/ut_mock/mock.o 00:02:12.040 CC lib/ut/ut.o 00:02:12.040 LIB libspdk_log.a 00:02:12.040 LIB libspdk_ut.a 00:02:12.040 LIB 
libspdk_ut_mock.a 00:02:12.040 SO libspdk_log.so.7.1 00:02:12.040 SO libspdk_ut.so.2.0 00:02:12.040 SO libspdk_ut_mock.so.6.0 00:02:12.040 SYMLINK libspdk_ut.so 00:02:12.040 SYMLINK libspdk_log.so 00:02:12.040 SYMLINK libspdk_ut_mock.so 00:02:12.611 CC lib/util/base64.o 00:02:12.611 CC lib/ioat/ioat.o 00:02:12.611 CC lib/util/bit_array.o 00:02:12.611 CC lib/util/cpuset.o 00:02:12.611 CC lib/util/crc16.o 00:02:12.611 CC lib/util/crc32.o 00:02:12.611 CXX lib/trace_parser/trace.o 00:02:12.611 CC lib/util/crc32c.o 00:02:12.611 CC lib/util/crc32_ieee.o 00:02:12.611 CC lib/util/crc64.o 00:02:12.611 CC lib/dma/dma.o 00:02:12.611 CC lib/util/dif.o 00:02:12.611 CC lib/util/fd.o 00:02:12.611 CC lib/util/fd_group.o 00:02:12.611 CC lib/util/file.o 00:02:12.611 CC lib/util/hexlify.o 00:02:12.611 CC lib/util/iov.o 00:02:12.611 CC lib/util/math.o 00:02:12.611 CC lib/util/net.o 00:02:12.611 CC lib/util/pipe.o 00:02:12.611 CC lib/util/strerror_tls.o 00:02:12.611 CC lib/util/string.o 00:02:12.611 CC lib/util/uuid.o 00:02:12.611 CC lib/util/xor.o 00:02:12.611 CC lib/util/zipf.o 00:02:12.611 CC lib/util/md5.o 00:02:12.611 CC lib/vfio_user/host/vfio_user_pci.o 00:02:12.611 CC lib/vfio_user/host/vfio_user.o 00:02:12.611 LIB libspdk_dma.a 00:02:12.611 SO libspdk_dma.so.5.0 00:02:12.873 LIB libspdk_ioat.a 00:02:12.873 SYMLINK libspdk_dma.so 00:02:12.873 SO libspdk_ioat.so.7.0 00:02:12.873 SYMLINK libspdk_ioat.so 00:02:12.873 LIB libspdk_vfio_user.a 00:02:12.873 SO libspdk_vfio_user.so.5.0 00:02:13.133 LIB libspdk_util.a 00:02:13.133 SYMLINK libspdk_vfio_user.so 00:02:13.133 SO libspdk_util.so.10.1 00:02:13.133 SYMLINK libspdk_util.so 00:02:13.394 LIB libspdk_trace_parser.a 00:02:13.394 SO libspdk_trace_parser.so.6.0 00:02:13.394 SYMLINK libspdk_trace_parser.so 00:02:13.655 CC lib/conf/conf.o 00:02:13.655 CC lib/vmd/vmd.o 00:02:13.655 CC lib/vmd/led.o 00:02:13.655 CC lib/idxd/idxd.o 00:02:13.655 CC lib/idxd/idxd_user.o 00:02:13.655 CC lib/rdma_utils/rdma_utils.o 00:02:13.655 CC 
lib/json/json_parse.o 00:02:13.655 CC lib/idxd/idxd_kernel.o 00:02:13.655 CC lib/env_dpdk/env.o 00:02:13.655 CC lib/json/json_util.o 00:02:13.655 CC lib/env_dpdk/memory.o 00:02:13.655 CC lib/json/json_write.o 00:02:13.655 CC lib/env_dpdk/pci.o 00:02:13.655 CC lib/env_dpdk/init.o 00:02:13.655 CC lib/env_dpdk/threads.o 00:02:13.655 CC lib/env_dpdk/pci_ioat.o 00:02:13.655 CC lib/env_dpdk/pci_virtio.o 00:02:13.655 CC lib/env_dpdk/pci_vmd.o 00:02:13.655 CC lib/env_dpdk/pci_idxd.o 00:02:13.655 CC lib/env_dpdk/pci_event.o 00:02:13.655 CC lib/env_dpdk/sigbus_handler.o 00:02:13.655 CC lib/env_dpdk/pci_dpdk.o 00:02:13.655 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:13.655 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:13.915 LIB libspdk_conf.a 00:02:13.915 SO libspdk_conf.so.6.0 00:02:13.915 LIB libspdk_rdma_utils.a 00:02:13.915 LIB libspdk_json.a 00:02:13.916 SO libspdk_rdma_utils.so.1.0 00:02:13.916 SYMLINK libspdk_conf.so 00:02:13.916 SO libspdk_json.so.6.0 00:02:13.916 SYMLINK libspdk_rdma_utils.so 00:02:13.916 SYMLINK libspdk_json.so 00:02:14.176 LIB libspdk_idxd.a 00:02:14.176 LIB libspdk_vmd.a 00:02:14.176 SO libspdk_idxd.so.12.1 00:02:14.176 SO libspdk_vmd.so.6.0 00:02:14.176 SYMLINK libspdk_idxd.so 00:02:14.436 SYMLINK libspdk_vmd.so 00:02:14.436 CC lib/rdma_provider/common.o 00:02:14.436 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:14.436 CC lib/jsonrpc/jsonrpc_server.o 00:02:14.436 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:14.436 CC lib/jsonrpc/jsonrpc_client.o 00:02:14.436 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:14.696 LIB libspdk_rdma_provider.a 00:02:14.697 SO libspdk_rdma_provider.so.7.0 00:02:14.697 LIB libspdk_jsonrpc.a 00:02:14.697 SO libspdk_jsonrpc.so.6.0 00:02:14.697 SYMLINK libspdk_rdma_provider.so 00:02:14.697 SYMLINK libspdk_jsonrpc.so 00:02:15.019 LIB libspdk_env_dpdk.a 00:02:15.019 SO libspdk_env_dpdk.so.15.1 00:02:15.019 SYMLINK libspdk_env_dpdk.so 00:02:15.019 CC lib/rpc/rpc.o 00:02:15.279 LIB libspdk_rpc.a 00:02:15.279 SO libspdk_rpc.so.6.0 00:02:15.540 
SYMLINK libspdk_rpc.so 00:02:15.800 CC lib/notify/notify.o 00:02:15.800 CC lib/notify/notify_rpc.o 00:02:15.800 CC lib/trace/trace.o 00:02:15.800 CC lib/trace/trace_flags.o 00:02:15.800 CC lib/trace/trace_rpc.o 00:02:15.800 CC lib/keyring/keyring.o 00:02:15.800 CC lib/keyring/keyring_rpc.o 00:02:16.060 LIB libspdk_notify.a 00:02:16.060 SO libspdk_notify.so.6.0 00:02:16.060 LIB libspdk_keyring.a 00:02:16.060 LIB libspdk_trace.a 00:02:16.060 SYMLINK libspdk_notify.so 00:02:16.060 SO libspdk_keyring.so.2.0 00:02:16.060 SO libspdk_trace.so.11.0 00:02:16.321 SYMLINK libspdk_keyring.so 00:02:16.321 SYMLINK libspdk_trace.so 00:02:16.582 CC lib/sock/sock.o 00:02:16.582 CC lib/sock/sock_rpc.o 00:02:16.582 CC lib/thread/thread.o 00:02:16.582 CC lib/thread/iobuf.o 00:02:16.843 LIB libspdk_sock.a 00:02:17.105 SO libspdk_sock.so.10.0 00:02:17.105 SYMLINK libspdk_sock.so 00:02:17.367 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:17.367 CC lib/nvme/nvme_ctrlr.o 00:02:17.367 CC lib/nvme/nvme_fabric.o 00:02:17.367 CC lib/nvme/nvme_ns_cmd.o 00:02:17.367 CC lib/nvme/nvme_ns.o 00:02:17.367 CC lib/nvme/nvme_pcie_common.o 00:02:17.367 CC lib/nvme/nvme_pcie.o 00:02:17.367 CC lib/nvme/nvme_qpair.o 00:02:17.367 CC lib/nvme/nvme.o 00:02:17.367 CC lib/nvme/nvme_quirks.o 00:02:17.367 CC lib/nvme/nvme_transport.o 00:02:17.367 CC lib/nvme/nvme_discovery.o 00:02:17.367 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:17.367 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:17.367 CC lib/nvme/nvme_tcp.o 00:02:17.367 CC lib/nvme/nvme_opal.o 00:02:17.367 CC lib/nvme/nvme_io_msg.o 00:02:17.367 CC lib/nvme/nvme_poll_group.o 00:02:17.367 CC lib/nvme/nvme_zns.o 00:02:17.367 CC lib/nvme/nvme_stubs.o 00:02:17.367 CC lib/nvme/nvme_auth.o 00:02:17.367 CC lib/nvme/nvme_cuse.o 00:02:17.367 CC lib/nvme/nvme_vfio_user.o 00:02:17.367 CC lib/nvme/nvme_rdma.o 00:02:17.939 LIB libspdk_thread.a 00:02:17.939 SO libspdk_thread.so.11.0 00:02:17.939 SYMLINK libspdk_thread.so 00:02:18.201 CC lib/init/json_config.o 00:02:18.201 CC lib/init/subsystem.o 
00:02:18.201 CC lib/fsdev/fsdev.o 00:02:18.201 CC lib/init/subsystem_rpc.o 00:02:18.201 CC lib/fsdev/fsdev_io.o 00:02:18.201 CC lib/init/rpc.o 00:02:18.201 CC lib/fsdev/fsdev_rpc.o 00:02:18.201 CC lib/accel/accel.o 00:02:18.201 CC lib/accel/accel_rpc.o 00:02:18.201 CC lib/accel/accel_sw.o 00:02:18.201 CC lib/virtio/virtio.o 00:02:18.201 CC lib/virtio/virtio_vhost_user.o 00:02:18.201 CC lib/virtio/virtio_vfio_user.o 00:02:18.201 CC lib/virtio/virtio_pci.o 00:02:18.461 CC lib/blob/blobstore.o 00:02:18.461 CC lib/blob/request.o 00:02:18.461 CC lib/blob/zeroes.o 00:02:18.461 CC lib/blob/blob_bs_dev.o 00:02:18.461 CC lib/vfu_tgt/tgt_endpoint.o 00:02:18.461 CC lib/vfu_tgt/tgt_rpc.o 00:02:18.461 LIB libspdk_init.a 00:02:18.722 SO libspdk_init.so.6.0 00:02:18.722 LIB libspdk_virtio.a 00:02:18.722 LIB libspdk_vfu_tgt.a 00:02:18.722 SYMLINK libspdk_init.so 00:02:18.722 SO libspdk_vfu_tgt.so.3.0 00:02:18.722 SO libspdk_virtio.so.7.0 00:02:18.722 SYMLINK libspdk_vfu_tgt.so 00:02:18.722 SYMLINK libspdk_virtio.so 00:02:18.983 LIB libspdk_fsdev.a 00:02:18.983 SO libspdk_fsdev.so.2.0 00:02:18.983 CC lib/event/app.o 00:02:18.983 CC lib/event/reactor.o 00:02:18.983 CC lib/event/log_rpc.o 00:02:18.983 CC lib/event/app_rpc.o 00:02:18.983 CC lib/event/scheduler_static.o 00:02:18.983 SYMLINK libspdk_fsdev.so 00:02:19.245 LIB libspdk_accel.a 00:02:19.245 SO libspdk_accel.so.16.0 00:02:19.506 LIB libspdk_nvme.a 00:02:19.506 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:19.506 SYMLINK libspdk_accel.so 00:02:19.506 LIB libspdk_event.a 00:02:19.506 SO libspdk_nvme.so.15.0 00:02:19.506 SO libspdk_event.so.14.0 00:02:19.506 SYMLINK libspdk_event.so 00:02:19.767 SYMLINK libspdk_nvme.so 00:02:19.767 CC lib/bdev/bdev.o 00:02:19.767 CC lib/bdev/bdev_rpc.o 00:02:19.767 CC lib/bdev/bdev_zone.o 00:02:19.767 CC lib/bdev/part.o 00:02:19.767 CC lib/bdev/scsi_nvme.o 00:02:20.029 LIB libspdk_fuse_dispatcher.a 00:02:20.029 SO libspdk_fuse_dispatcher.so.1.0 00:02:20.029 SYMLINK libspdk_fuse_dispatcher.so 
00:02:20.972 LIB libspdk_blob.a 00:02:20.972 SO libspdk_blob.so.11.0 00:02:21.232 SYMLINK libspdk_blob.so 00:02:21.493 CC lib/lvol/lvol.o 00:02:21.493 CC lib/blobfs/blobfs.o 00:02:21.493 CC lib/blobfs/tree.o 00:02:22.068 LIB libspdk_bdev.a 00:02:22.068 SO libspdk_bdev.so.17.0 00:02:22.329 LIB libspdk_blobfs.a 00:02:22.329 SO libspdk_blobfs.so.10.0 00:02:22.329 LIB libspdk_lvol.a 00:02:22.329 SYMLINK libspdk_bdev.so 00:02:22.329 SO libspdk_lvol.so.10.0 00:02:22.329 SYMLINK libspdk_blobfs.so 00:02:22.329 SYMLINK libspdk_lvol.so 00:02:22.590 CC lib/nvmf/ctrlr.o 00:02:22.590 CC lib/scsi/dev.o 00:02:22.590 CC lib/scsi/lun.o 00:02:22.590 CC lib/nvmf/ctrlr_discovery.o 00:02:22.590 CC lib/scsi/port.o 00:02:22.590 CC lib/ublk/ublk.o 00:02:22.590 CC lib/nvmf/ctrlr_bdev.o 00:02:22.590 CC lib/scsi/scsi.o 00:02:22.590 CC lib/ublk/ublk_rpc.o 00:02:22.590 CC lib/nvmf/subsystem.o 00:02:22.590 CC lib/scsi/scsi_bdev.o 00:02:22.590 CC lib/scsi/scsi_pr.o 00:02:22.590 CC lib/nvmf/nvmf.o 00:02:22.590 CC lib/scsi/scsi_rpc.o 00:02:22.590 CC lib/ftl/ftl_core.o 00:02:22.590 CC lib/nvmf/nvmf_rpc.o 00:02:22.590 CC lib/scsi/task.o 00:02:22.590 CC lib/ftl/ftl_init.o 00:02:22.590 CC lib/nvmf/transport.o 00:02:22.590 CC lib/nbd/nbd.o 00:02:22.590 CC lib/ftl/ftl_layout.o 00:02:22.590 CC lib/nvmf/tcp.o 00:02:22.590 CC lib/ftl/ftl_debug.o 00:02:22.590 CC lib/nbd/nbd_rpc.o 00:02:22.590 CC lib/nvmf/stubs.o 00:02:22.590 CC lib/ftl/ftl_io.o 00:02:22.590 CC lib/nvmf/mdns_server.o 00:02:22.590 CC lib/ftl/ftl_sb.o 00:02:22.590 CC lib/ftl/ftl_l2p.o 00:02:22.590 CC lib/nvmf/rdma.o 00:02:22.590 CC lib/ftl/ftl_l2p_flat.o 00:02:22.590 CC lib/nvmf/vfio_user.o 00:02:22.590 CC lib/nvmf/auth.o 00:02:22.590 CC lib/ftl/ftl_nv_cache.o 00:02:22.590 CC lib/ftl/ftl_band.o 00:02:22.590 CC lib/ftl/ftl_band_ops.o 00:02:22.590 CC lib/ftl/ftl_rq.o 00:02:22.590 CC lib/ftl/ftl_writer.o 00:02:22.590 CC lib/ftl/ftl_reloc.o 00:02:22.590 CC lib/ftl/ftl_l2p_cache.o 00:02:22.590 CC lib/ftl/ftl_p2l.o 00:02:22.590 CC 
lib/ftl/ftl_p2l_log.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:22.590 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:22.590 CC lib/ftl/utils/ftl_conf.o 00:02:22.590 CC lib/ftl/utils/ftl_md.o 00:02:22.590 CC lib/ftl/utils/ftl_bitmap.o 00:02:22.854 CC lib/ftl/utils/ftl_mempool.o 00:02:22.854 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:22.854 CC lib/ftl/utils/ftl_property.o 00:02:22.854 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:22.854 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:22.854 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:22.854 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:22.854 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:22.854 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:22.854 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:22.854 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:22.854 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:22.854 CC lib/ftl/base/ftl_base_dev.o 00:02:22.854 CC lib/ftl/ftl_trace.o 00:02:22.854 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.428 LIB libspdk_nbd.a 00:02:23.428 SO libspdk_nbd.so.7.0 00:02:23.428 LIB libspdk_ublk.a 00:02:23.691 LIB libspdk_scsi.a 00:02:23.691 SO libspdk_ublk.so.3.0 00:02:23.691 SYMLINK libspdk_nbd.so 00:02:23.691 SO libspdk_scsi.so.9.0 00:02:23.691 SYMLINK libspdk_ublk.so 00:02:23.691 SYMLINK libspdk_scsi.so 00:02:23.951 LIB libspdk_ftl.a 00:02:24.211 CC lib/iscsi/conn.o 00:02:24.211 CC 
lib/iscsi/init_grp.o 00:02:24.211 CC lib/iscsi/iscsi.o 00:02:24.211 CC lib/vhost/vhost.o 00:02:24.211 CC lib/iscsi/param.o 00:02:24.211 CC lib/vhost/vhost_rpc.o 00:02:24.211 CC lib/iscsi/portal_grp.o 00:02:24.211 CC lib/iscsi/tgt_node.o 00:02:24.211 CC lib/vhost/vhost_scsi.o 00:02:24.211 CC lib/iscsi/iscsi_subsystem.o 00:02:24.211 CC lib/vhost/vhost_blk.o 00:02:24.211 CC lib/iscsi/iscsi_rpc.o 00:02:24.211 CC lib/vhost/rte_vhost_user.o 00:02:24.211 CC lib/iscsi/task.o 00:02:24.211 SO libspdk_ftl.so.9.0 00:02:24.472 SYMLINK libspdk_ftl.so 00:02:24.732 LIB libspdk_nvmf.a 00:02:24.993 SO libspdk_nvmf.so.20.0 00:02:24.993 LIB libspdk_vhost.a 00:02:25.253 SO libspdk_vhost.so.8.0 00:02:25.253 SYMLINK libspdk_nvmf.so 00:02:25.253 SYMLINK libspdk_vhost.so 00:02:25.253 LIB libspdk_iscsi.a 00:02:25.514 SO libspdk_iscsi.so.8.0 00:02:25.514 SYMLINK libspdk_iscsi.so 00:02:26.085 CC module/vfu_device/vfu_virtio.o 00:02:26.085 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.085 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.085 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.085 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.085 CC module/vfu_device/vfu_virtio_fs.o 00:02:26.345 LIB libspdk_env_dpdk_rpc.a 00:02:26.345 CC module/keyring/file/keyring.o 00:02:26.345 CC module/keyring/file/keyring_rpc.o 00:02:26.345 CC module/blob/bdev/blob_bdev.o 00:02:26.345 CC module/accel/error/accel_error.o 00:02:26.345 CC module/accel/error/accel_error_rpc.o 00:02:26.345 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.345 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.345 CC module/accel/iaa/accel_iaa.o 00:02:26.345 CC module/fsdev/aio/fsdev_aio.o 00:02:26.345 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.345 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:26.345 CC module/fsdev/aio/linux_aio_mgr.o 00:02:26.345 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.346 CC module/sock/posix/posix.o 00:02:26.346 SO libspdk_env_dpdk_rpc.so.6.0 00:02:26.346 CC module/accel/dsa/accel_dsa.o 
00:02:26.346 CC module/accel/ioat/accel_ioat.o 00:02:26.346 CC module/keyring/linux/keyring.o 00:02:26.346 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.346 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.346 CC module/keyring/linux/keyring_rpc.o 00:02:26.346 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.606 LIB libspdk_keyring_file.a 00:02:26.606 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.606 LIB libspdk_keyring_linux.a 00:02:26.606 LIB libspdk_accel_error.a 00:02:26.606 LIB libspdk_scheduler_gscheduler.a 00:02:26.606 LIB libspdk_scheduler_dynamic.a 00:02:26.606 SO libspdk_keyring_file.so.2.0 00:02:26.606 LIB libspdk_accel_ioat.a 00:02:26.606 SO libspdk_scheduler_gscheduler.so.4.0 00:02:26.606 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:26.606 SO libspdk_keyring_linux.so.1.0 00:02:26.606 SO libspdk_accel_error.so.2.0 00:02:26.606 SO libspdk_scheduler_dynamic.so.4.0 00:02:26.606 LIB libspdk_accel_iaa.a 00:02:26.606 SYMLINK libspdk_keyring_file.so 00:02:26.606 SO libspdk_accel_ioat.so.6.0 00:02:26.606 LIB libspdk_blob_bdev.a 00:02:26.606 SO libspdk_accel_iaa.so.3.0 00:02:26.606 SYMLINK libspdk_keyring_linux.so 00:02:26.606 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.606 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.606 SYMLINK libspdk_accel_error.so 00:02:26.606 LIB libspdk_accel_dsa.a 00:02:26.606 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.606 SYMLINK libspdk_accel_ioat.so 00:02:26.606 SO libspdk_blob_bdev.so.11.0 00:02:26.606 SO libspdk_accel_dsa.so.5.0 00:02:26.606 SYMLINK libspdk_accel_iaa.so 00:02:26.866 LIB libspdk_vfu_device.a 00:02:26.866 SYMLINK libspdk_blob_bdev.so 00:02:26.866 SYMLINK libspdk_accel_dsa.so 00:02:26.866 SO libspdk_vfu_device.so.3.0 00:02:26.866 SYMLINK libspdk_vfu_device.so 00:02:26.866 LIB libspdk_fsdev_aio.a 00:02:27.131 SO libspdk_fsdev_aio.so.1.0 00:02:27.131 LIB libspdk_sock_posix.a 00:02:27.131 SO libspdk_sock_posix.so.6.0 00:02:27.131 SYMLINK libspdk_fsdev_aio.so 00:02:27.131 SYMLINK libspdk_sock_posix.so 00:02:27.391 
CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.391 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.391 CC module/bdev/delay/vbdev_delay.o 00:02:27.392 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.392 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.392 CC module/bdev/split/vbdev_split.o 00:02:27.392 CC module/bdev/gpt/gpt.o 00:02:27.392 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.392 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.392 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.392 CC module/bdev/error/vbdev_error.o 00:02:27.392 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.392 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.392 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.392 CC module/bdev/raid/bdev_raid.o 00:02:27.392 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.392 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.392 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.392 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.392 CC module/bdev/malloc/bdev_malloc.o 00:02:27.392 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.392 CC module/bdev/raid/raid0.o 00:02:27.392 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.392 CC module/bdev/aio/bdev_aio.o 00:02:27.392 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.392 CC module/bdev/raid/raid1.o 00:02:27.392 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.392 CC module/bdev/raid/concat.o 00:02:27.392 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.392 CC module/bdev/ftl/bdev_ftl.o 00:02:27.392 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.392 CC module/bdev/null/bdev_null.o 00:02:27.392 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.392 CC module/bdev/null/bdev_null_rpc.o 00:02:27.392 CC module/bdev/nvme/bdev_nvme.o 00:02:27.392 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.392 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.392 CC module/bdev/nvme/nvme_rpc.o 00:02:27.392 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.392 CC module/bdev/nvme/vbdev_opal.o 00:02:27.392 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.392 
CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.652 LIB libspdk_blobfs_bdev.a 00:02:27.652 SO libspdk_blobfs_bdev.so.6.0 00:02:27.652 LIB libspdk_bdev_split.a 00:02:27.652 SO libspdk_bdev_split.so.6.0 00:02:27.652 LIB libspdk_bdev_error.a 00:02:27.652 SYMLINK libspdk_blobfs_bdev.so 00:02:27.652 SO libspdk_bdev_error.so.6.0 00:02:27.652 LIB libspdk_bdev_gpt.a 00:02:27.652 SYMLINK libspdk_bdev_split.so 00:02:27.652 SO libspdk_bdev_gpt.so.6.0 00:02:27.652 LIB libspdk_bdev_null.a 00:02:27.652 LIB libspdk_bdev_ftl.a 00:02:27.652 SYMLINK libspdk_bdev_error.so 00:02:27.652 LIB libspdk_bdev_passthru.a 00:02:27.652 LIB libspdk_bdev_zone_block.a 00:02:27.913 LIB libspdk_bdev_iscsi.a 00:02:27.913 SO libspdk_bdev_null.so.6.0 00:02:27.913 SO libspdk_bdev_passthru.so.6.0 00:02:27.913 SYMLINK libspdk_bdev_gpt.so 00:02:27.913 SO libspdk_bdev_zone_block.so.6.0 00:02:27.913 LIB libspdk_bdev_aio.a 00:02:27.913 SO libspdk_bdev_ftl.so.6.0 00:02:27.913 SO libspdk_bdev_iscsi.so.6.0 00:02:27.913 LIB libspdk_bdev_delay.a 00:02:27.913 LIB libspdk_bdev_malloc.a 00:02:27.913 SO libspdk_bdev_aio.so.6.0 00:02:27.913 SYMLINK libspdk_bdev_null.so 00:02:27.913 SYMLINK libspdk_bdev_ftl.so 00:02:27.913 SYMLINK libspdk_bdev_passthru.so 00:02:27.913 SO libspdk_bdev_malloc.so.6.0 00:02:27.913 SO libspdk_bdev_delay.so.6.0 00:02:27.913 SYMLINK libspdk_bdev_zone_block.so 00:02:27.913 SYMLINK libspdk_bdev_iscsi.so 00:02:27.913 LIB libspdk_bdev_lvol.a 00:02:27.913 SYMLINK libspdk_bdev_aio.so 00:02:27.913 LIB libspdk_bdev_virtio.a 00:02:27.913 SYMLINK libspdk_bdev_malloc.so 00:02:27.913 SYMLINK libspdk_bdev_delay.so 00:02:27.913 SO libspdk_bdev_lvol.so.6.0 00:02:27.913 SO libspdk_bdev_virtio.so.6.0 00:02:28.173 SYMLINK libspdk_bdev_lvol.so 00:02:28.173 SYMLINK libspdk_bdev_virtio.so 00:02:28.434 LIB libspdk_bdev_raid.a 00:02:28.434 SO libspdk_bdev_raid.so.6.0 00:02:28.434 SYMLINK libspdk_bdev_raid.so 00:02:29.817 LIB libspdk_bdev_nvme.a 00:02:29.817 SO libspdk_bdev_nvme.so.7.1 00:02:29.817 SYMLINK 
libspdk_bdev_nvme.so 00:02:30.758 CC module/event/subsystems/iobuf/iobuf.o 00:02:30.758 CC module/event/subsystems/vmd/vmd.o 00:02:30.758 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:30.758 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:30.758 CC module/event/subsystems/keyring/keyring.o 00:02:30.758 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:30.758 CC module/event/subsystems/sock/sock.o 00:02:30.758 CC module/event/subsystems/fsdev/fsdev.o 00:02:30.758 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:30.758 CC module/event/subsystems/scheduler/scheduler.o 00:02:30.758 LIB libspdk_event_vhost_blk.a 00:02:30.758 LIB libspdk_event_keyring.a 00:02:30.758 LIB libspdk_event_vfu_tgt.a 00:02:30.758 LIB libspdk_event_vmd.a 00:02:30.758 LIB libspdk_event_iobuf.a 00:02:30.758 LIB libspdk_event_sock.a 00:02:30.758 LIB libspdk_event_fsdev.a 00:02:30.758 LIB libspdk_event_scheduler.a 00:02:30.758 SO libspdk_event_vhost_blk.so.3.0 00:02:30.758 SO libspdk_event_vfu_tgt.so.3.0 00:02:30.758 SO libspdk_event_keyring.so.1.0 00:02:30.758 SO libspdk_event_vmd.so.6.0 00:02:30.758 SO libspdk_event_sock.so.5.0 00:02:30.758 SO libspdk_event_iobuf.so.3.0 00:02:30.758 SO libspdk_event_fsdev.so.1.0 00:02:30.758 SO libspdk_event_scheduler.so.4.0 00:02:30.758 SYMLINK libspdk_event_vfu_tgt.so 00:02:30.758 SYMLINK libspdk_event_vhost_blk.so 00:02:30.758 SYMLINK libspdk_event_keyring.so 00:02:30.758 SYMLINK libspdk_event_iobuf.so 00:02:30.758 SYMLINK libspdk_event_sock.so 00:02:30.758 SYMLINK libspdk_event_vmd.so 00:02:30.758 SYMLINK libspdk_event_fsdev.so 00:02:30.758 SYMLINK libspdk_event_scheduler.so 00:02:31.327 CC module/event/subsystems/accel/accel.o 00:02:31.327 LIB libspdk_event_accel.a 00:02:31.327 SO libspdk_event_accel.so.6.0 00:02:31.587 SYMLINK libspdk_event_accel.so 00:02:31.847 CC module/event/subsystems/bdev/bdev.o 00:02:32.107 LIB libspdk_event_bdev.a 00:02:32.107 SO libspdk_event_bdev.so.6.0 00:02:32.107 SYMLINK libspdk_event_bdev.so 00:02:32.368 CC 
module/event/subsystems/scsi/scsi.o 00:02:32.368 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:32.368 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:32.368 CC module/event/subsystems/nbd/nbd.o 00:02:32.368 CC module/event/subsystems/ublk/ublk.o 00:02:32.628 LIB libspdk_event_nbd.a 00:02:32.628 LIB libspdk_event_ublk.a 00:02:32.628 LIB libspdk_event_scsi.a 00:02:32.628 SO libspdk_event_nbd.so.6.0 00:02:32.628 SO libspdk_event_ublk.so.3.0 00:02:32.628 SO libspdk_event_scsi.so.6.0 00:02:32.628 LIB libspdk_event_nvmf.a 00:02:32.628 SYMLINK libspdk_event_nbd.so 00:02:32.628 SYMLINK libspdk_event_scsi.so 00:02:32.628 SO libspdk_event_nvmf.so.6.0 00:02:32.889 SYMLINK libspdk_event_ublk.so 00:02:32.889 SYMLINK libspdk_event_nvmf.so 00:02:33.149 CC module/event/subsystems/iscsi/iscsi.o 00:02:33.149 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:33.149 LIB libspdk_event_vhost_scsi.a 00:02:33.411 LIB libspdk_event_iscsi.a 00:02:33.411 SO libspdk_event_vhost_scsi.so.3.0 00:02:33.411 SO libspdk_event_iscsi.so.6.0 00:02:33.411 SYMLINK libspdk_event_vhost_scsi.so 00:02:33.411 SYMLINK libspdk_event_iscsi.so 00:02:33.671 SO libspdk.so.6.0 00:02:33.671 SYMLINK libspdk.so 00:02:33.931 CXX app/trace/trace.o 00:02:33.931 CC app/trace_record/trace_record.o 00:02:33.931 CC test/rpc_client/rpc_client_test.o 00:02:33.931 CC app/spdk_top/spdk_top.o 00:02:33.931 TEST_HEADER include/spdk/accel.h 00:02:33.931 CC app/spdk_nvme_discover/discovery_aer.o 00:02:33.931 TEST_HEADER include/spdk/accel_module.h 00:02:33.931 TEST_HEADER include/spdk/assert.h 00:02:33.931 TEST_HEADER include/spdk/barrier.h 00:02:33.931 CC app/spdk_nvme_identify/identify.o 00:02:33.931 TEST_HEADER include/spdk/base64.h 00:02:33.931 TEST_HEADER include/spdk/bdev.h 00:02:33.931 TEST_HEADER include/spdk/bdev_module.h 00:02:33.931 CC app/spdk_nvme_perf/perf.o 00:02:33.931 TEST_HEADER include/spdk/bdev_zone.h 00:02:33.931 TEST_HEADER include/spdk/bit_array.h 00:02:33.931 CC app/spdk_lspci/spdk_lspci.o 
00:02:33.931 TEST_HEADER include/spdk/bit_pool.h 00:02:33.931 TEST_HEADER include/spdk/blob_bdev.h 00:02:33.931 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:33.931 TEST_HEADER include/spdk/blobfs.h 00:02:33.931 TEST_HEADER include/spdk/blob.h 00:02:33.931 TEST_HEADER include/spdk/conf.h 00:02:33.931 TEST_HEADER include/spdk/config.h 00:02:33.931 TEST_HEADER include/spdk/cpuset.h 00:02:33.931 TEST_HEADER include/spdk/crc16.h 00:02:34.200 TEST_HEADER include/spdk/crc32.h 00:02:34.200 TEST_HEADER include/spdk/crc64.h 00:02:34.200 TEST_HEADER include/spdk/dma.h 00:02:34.200 TEST_HEADER include/spdk/dif.h 00:02:34.200 TEST_HEADER include/spdk/env_dpdk.h 00:02:34.200 TEST_HEADER include/spdk/endian.h 00:02:34.200 TEST_HEADER include/spdk/env.h 00:02:34.200 TEST_HEADER include/spdk/event.h 00:02:34.200 TEST_HEADER include/spdk/fd_group.h 00:02:34.200 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:34.200 TEST_HEADER include/spdk/file.h 00:02:34.200 TEST_HEADER include/spdk/fd.h 00:02:34.200 TEST_HEADER include/spdk/fsdev.h 00:02:34.200 TEST_HEADER include/spdk/ftl.h 00:02:34.200 TEST_HEADER include/spdk/fsdev_module.h 00:02:34.200 TEST_HEADER include/spdk/gpt_spec.h 00:02:34.200 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:34.200 CC app/iscsi_tgt/iscsi_tgt.o 00:02:34.200 TEST_HEADER include/spdk/hexlify.h 00:02:34.200 CC app/spdk_dd/spdk_dd.o 00:02:34.200 CC app/nvmf_tgt/nvmf_main.o 00:02:34.200 TEST_HEADER include/spdk/histogram_data.h 00:02:34.200 TEST_HEADER include/spdk/idxd.h 00:02:34.200 TEST_HEADER include/spdk/ioat.h 00:02:34.200 TEST_HEADER include/spdk/idxd_spec.h 00:02:34.200 TEST_HEADER include/spdk/init.h 00:02:34.200 TEST_HEADER include/spdk/ioat_spec.h 00:02:34.200 TEST_HEADER include/spdk/iscsi_spec.h 00:02:34.200 TEST_HEADER include/spdk/json.h 00:02:34.200 TEST_HEADER include/spdk/jsonrpc.h 00:02:34.201 TEST_HEADER include/spdk/keyring.h 00:02:34.201 TEST_HEADER include/spdk/keyring_module.h 00:02:34.201 TEST_HEADER include/spdk/likely.h 00:02:34.201 
TEST_HEADER include/spdk/lvol.h 00:02:34.201 TEST_HEADER include/spdk/log.h 00:02:34.201 TEST_HEADER include/spdk/md5.h 00:02:34.201 TEST_HEADER include/spdk/memory.h 00:02:34.201 TEST_HEADER include/spdk/mmio.h 00:02:34.201 TEST_HEADER include/spdk/net.h 00:02:34.201 TEST_HEADER include/spdk/nbd.h 00:02:34.201 TEST_HEADER include/spdk/notify.h 00:02:34.201 CC app/spdk_tgt/spdk_tgt.o 00:02:34.201 TEST_HEADER include/spdk/nvme.h 00:02:34.201 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:34.201 TEST_HEADER include/spdk/nvme_intel.h 00:02:34.201 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:34.201 TEST_HEADER include/spdk/nvme_spec.h 00:02:34.201 TEST_HEADER include/spdk/nvme_zns.h 00:02:34.201 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:34.201 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:34.201 TEST_HEADER include/spdk/nvmf_transport.h 00:02:34.201 TEST_HEADER include/spdk/nvmf.h 00:02:34.201 TEST_HEADER include/spdk/nvmf_spec.h 00:02:34.201 TEST_HEADER include/spdk/pci_ids.h 00:02:34.201 TEST_HEADER include/spdk/opal.h 00:02:34.201 TEST_HEADER include/spdk/opal_spec.h 00:02:34.201 TEST_HEADER include/spdk/pipe.h 00:02:34.201 TEST_HEADER include/spdk/queue.h 00:02:34.201 TEST_HEADER include/spdk/reduce.h 00:02:34.201 TEST_HEADER include/spdk/rpc.h 00:02:34.201 TEST_HEADER include/spdk/scheduler.h 00:02:34.201 TEST_HEADER include/spdk/scsi.h 00:02:34.201 TEST_HEADER include/spdk/scsi_spec.h 00:02:34.201 TEST_HEADER include/spdk/stdinc.h 00:02:34.201 TEST_HEADER include/spdk/sock.h 00:02:34.201 TEST_HEADER include/spdk/thread.h 00:02:34.201 TEST_HEADER include/spdk/string.h 00:02:34.201 TEST_HEADER include/spdk/trace.h 00:02:34.201 TEST_HEADER include/spdk/tree.h 00:02:34.201 TEST_HEADER include/spdk/trace_parser.h 00:02:34.201 TEST_HEADER include/spdk/ublk.h 00:02:34.201 TEST_HEADER include/spdk/util.h 00:02:34.201 TEST_HEADER include/spdk/uuid.h 00:02:34.201 TEST_HEADER include/spdk/version.h 00:02:34.201 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:34.201 
TEST_HEADER include/spdk/vfio_user_pci.h 00:02:34.201 TEST_HEADER include/spdk/vhost.h 00:02:34.201 TEST_HEADER include/spdk/zipf.h 00:02:34.201 TEST_HEADER include/spdk/xor.h 00:02:34.201 TEST_HEADER include/spdk/vmd.h 00:02:34.201 CXX test/cpp_headers/accel.o 00:02:34.201 CXX test/cpp_headers/accel_module.o 00:02:34.201 CXX test/cpp_headers/barrier.o 00:02:34.201 CXX test/cpp_headers/assert.o 00:02:34.201 CXX test/cpp_headers/base64.o 00:02:34.201 CXX test/cpp_headers/bdev.o 00:02:34.201 CXX test/cpp_headers/bdev_module.o 00:02:34.201 CXX test/cpp_headers/bdev_zone.o 00:02:34.201 CXX test/cpp_headers/bit_array.o 00:02:34.201 CXX test/cpp_headers/bit_pool.o 00:02:34.201 CXX test/cpp_headers/blob_bdev.o 00:02:34.201 CXX test/cpp_headers/blobfs_bdev.o 00:02:34.201 CXX test/cpp_headers/blobfs.o 00:02:34.201 CXX test/cpp_headers/blob.o 00:02:34.201 CXX test/cpp_headers/conf.o 00:02:34.201 CXX test/cpp_headers/config.o 00:02:34.201 CXX test/cpp_headers/crc16.o 00:02:34.201 CXX test/cpp_headers/cpuset.o 00:02:34.201 CXX test/cpp_headers/crc32.o 00:02:34.201 CXX test/cpp_headers/crc64.o 00:02:34.201 CXX test/cpp_headers/dif.o 00:02:34.201 CXX test/cpp_headers/endian.o 00:02:34.201 CXX test/cpp_headers/env_dpdk.o 00:02:34.201 CXX test/cpp_headers/dma.o 00:02:34.201 CXX test/cpp_headers/env.o 00:02:34.201 CXX test/cpp_headers/event.o 00:02:34.201 CXX test/cpp_headers/fd_group.o 00:02:34.201 CXX test/cpp_headers/file.o 00:02:34.201 CXX test/cpp_headers/fd.o 00:02:34.201 CXX test/cpp_headers/ftl.o 00:02:34.201 CXX test/cpp_headers/fsdev_module.o 00:02:34.201 CXX test/cpp_headers/fsdev.o 00:02:34.201 CXX test/cpp_headers/fuse_dispatcher.o 00:02:34.201 CXX test/cpp_headers/gpt_spec.o 00:02:34.201 CXX test/cpp_headers/histogram_data.o 00:02:34.201 CXX test/cpp_headers/idxd.o 00:02:34.201 CXX test/cpp_headers/idxd_spec.o 00:02:34.201 CXX test/cpp_headers/hexlify.o 00:02:34.201 CXX test/cpp_headers/init.o 00:02:34.201 CXX test/cpp_headers/ioat.o 00:02:34.201 CXX 
test/cpp_headers/iscsi_spec.o 00:02:34.201 CXX test/cpp_headers/json.o 00:02:34.201 CXX test/cpp_headers/ioat_spec.o 00:02:34.201 CXX test/cpp_headers/keyring_module.o 00:02:34.201 CXX test/cpp_headers/jsonrpc.o 00:02:34.201 CXX test/cpp_headers/likely.o 00:02:34.201 CXX test/cpp_headers/keyring.o 00:02:34.201 CXX test/cpp_headers/log.o 00:02:34.201 CXX test/cpp_headers/lvol.o 00:02:34.201 CC examples/util/zipf/zipf.o 00:02:34.201 CXX test/cpp_headers/memory.o 00:02:34.201 CXX test/cpp_headers/md5.o 00:02:34.201 CXX test/cpp_headers/net.o 00:02:34.201 CXX test/cpp_headers/notify.o 00:02:34.201 CXX test/cpp_headers/mmio.o 00:02:34.201 CXX test/cpp_headers/nbd.o 00:02:34.201 CXX test/cpp_headers/nvme.o 00:02:34.201 CXX test/cpp_headers/nvme_ocssd.o 00:02:34.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:34.201 CXX test/cpp_headers/nvme_intel.o 00:02:34.201 CXX test/cpp_headers/nvme_spec.o 00:02:34.201 CC examples/ioat/verify/verify.o 00:02:34.201 CXX test/cpp_headers/nvme_zns.o 00:02:34.201 CXX test/cpp_headers/nvmf_cmd.o 00:02:34.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:34.201 CXX test/cpp_headers/nvmf_transport.o 00:02:34.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:34.201 CXX test/cpp_headers/nvmf_spec.o 00:02:34.201 CXX test/cpp_headers/nvmf.o 00:02:34.201 CC examples/ioat/perf/perf.o 00:02:34.201 CXX test/cpp_headers/opal.o 00:02:34.201 CXX test/cpp_headers/pci_ids.o 00:02:34.471 CXX test/cpp_headers/queue.o 00:02:34.471 CXX test/cpp_headers/pipe.o 00:02:34.471 CXX test/cpp_headers/reduce.o 00:02:34.471 CXX test/cpp_headers/opal_spec.o 00:02:34.471 CXX test/cpp_headers/rpc.o 00:02:34.471 CXX test/cpp_headers/scsi.o 00:02:34.471 CXX test/cpp_headers/scheduler.o 00:02:34.471 CXX test/cpp_headers/sock.o 00:02:34.471 CC test/app/histogram_perf/histogram_perf.o 00:02:34.471 CXX test/cpp_headers/stdinc.o 00:02:34.471 CXX test/cpp_headers/string.o 00:02:34.471 CC test/env/pci/pci_ut.o 00:02:34.471 CXX test/cpp_headers/scsi_spec.o 00:02:34.471 
CXX test/cpp_headers/trace.o 00:02:34.471 CXX test/cpp_headers/thread.o 00:02:34.471 CXX test/cpp_headers/trace_parser.o 00:02:34.471 CXX test/cpp_headers/ublk.o 00:02:34.471 CXX test/cpp_headers/tree.o 00:02:34.471 CC test/thread/poller_perf/poller_perf.o 00:02:34.471 CXX test/cpp_headers/uuid.o 00:02:34.471 CXX test/cpp_headers/version.o 00:02:34.471 CC test/env/vtophys/vtophys.o 00:02:34.471 CXX test/cpp_headers/vfio_user_spec.o 00:02:34.471 CXX test/cpp_headers/vfio_user_pci.o 00:02:34.471 CXX test/cpp_headers/util.o 00:02:34.471 CC test/dma/test_dma/test_dma.o 00:02:34.471 CXX test/cpp_headers/vmd.o 00:02:34.471 CXX test/cpp_headers/xor.o 00:02:34.471 CXX test/cpp_headers/vhost.o 00:02:34.471 CC app/fio/nvme/fio_plugin.o 00:02:34.471 CXX test/cpp_headers/zipf.o 00:02:34.471 CC test/app/jsoncat/jsoncat.o 00:02:34.471 LINK spdk_lspci 00:02:34.471 CC test/app/stub/stub.o 00:02:34.471 CC test/env/memory/memory_ut.o 00:02:34.471 CC test/app/bdev_svc/bdev_svc.o 00:02:34.471 LINK nvmf_tgt 00:02:34.744 CC app/fio/bdev/fio_plugin.o 00:02:34.744 LINK interrupt_tgt 00:02:34.744 LINK spdk_trace_record 00:02:34.744 LINK spdk_nvme_discover 00:02:34.744 LINK rpc_client_test 00:02:35.022 LINK iscsi_tgt 00:02:35.022 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:35.022 LINK poller_perf 00:02:35.022 CC test/env/mem_callbacks/mem_callbacks.o 00:02:35.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:35.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:35.284 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:35.284 LINK histogram_perf 00:02:35.284 LINK jsoncat 00:02:35.284 LINK spdk_tgt 00:02:35.544 LINK ioat_perf 00:02:35.544 LINK stub 00:02:35.544 LINK zipf 00:02:35.806 LINK env_dpdk_post_init 00:02:35.806 LINK vtophys 00:02:35.806 CC test/event/reactor_perf/reactor_perf.o 00:02:35.806 LINK bdev_svc 00:02:35.806 CC test/event/reactor/reactor.o 00:02:35.806 CC test/event/app_repeat/app_repeat.o 00:02:35.806 CC test/event/event_perf/event_perf.o 00:02:35.806 LINK spdk_dd 
00:02:35.806 LINK verify 00:02:35.806 LINK pci_ut 00:02:35.806 CC test/event/scheduler/scheduler.o 00:02:35.806 LINK vhost_fuzz 00:02:35.806 LINK spdk_trace 00:02:35.806 LINK spdk_nvme 00:02:35.806 LINK test_dma 00:02:35.806 LINK nvme_fuzz 00:02:35.806 LINK reactor_perf 00:02:35.806 LINK spdk_bdev 00:02:35.806 LINK reactor 00:02:36.067 LINK event_perf 00:02:36.067 LINK spdk_nvme_perf 00:02:36.067 LINK spdk_top 00:02:36.067 LINK mem_callbacks 00:02:36.067 LINK app_repeat 00:02:36.067 LINK spdk_nvme_identify 00:02:36.067 LINK scheduler 00:02:36.067 CC examples/idxd/perf/perf.o 00:02:36.067 CC examples/vmd/lsvmd/lsvmd.o 00:02:36.067 CC examples/vmd/led/led.o 00:02:36.067 CC examples/sock/hello_world/hello_sock.o 00:02:36.067 CC examples/thread/thread/thread_ex.o 00:02:36.327 CC app/vhost/vhost.o 00:02:36.327 LINK lsvmd 00:02:36.327 LINK led 00:02:36.327 LINK memory_ut 00:02:36.620 LINK hello_sock 00:02:36.620 LINK thread 00:02:36.620 LINK idxd_perf 00:02:36.620 CC test/nvme/startup/startup.o 00:02:36.620 CC test/nvme/boot_partition/boot_partition.o 00:02:36.620 CC test/nvme/sgl/sgl.o 00:02:36.620 CC test/nvme/aer/aer.o 00:02:36.620 CC test/nvme/fdp/fdp.o 00:02:36.620 CC test/nvme/reserve/reserve.o 00:02:36.620 CC test/nvme/reset/reset.o 00:02:36.620 CC test/nvme/connect_stress/connect_stress.o 00:02:36.620 CC test/nvme/overhead/overhead.o 00:02:36.620 CC test/nvme/cuse/cuse.o 00:02:36.620 CC test/nvme/e2edp/nvme_dp.o 00:02:36.620 CC test/nvme/err_injection/err_injection.o 00:02:36.620 CC test/nvme/compliance/nvme_compliance.o 00:02:36.620 CC test/nvme/simple_copy/simple_copy.o 00:02:36.620 CC test/accel/dif/dif.o 00:02:36.620 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:36.620 CC test/nvme/fused_ordering/fused_ordering.o 00:02:36.620 LINK vhost 00:02:36.620 CC test/blobfs/mkfs/mkfs.o 00:02:36.620 CC test/lvol/esnap/esnap.o 00:02:36.928 LINK startup 00:02:36.928 LINK boot_partition 00:02:36.928 LINK connect_stress 00:02:36.928 LINK doorbell_aers 00:02:36.928 LINK 
err_injection 00:02:36.928 LINK fused_ordering 00:02:36.928 LINK reserve 00:02:36.928 LINK simple_copy 00:02:36.928 LINK mkfs 00:02:36.928 LINK reset 00:02:36.928 LINK sgl 00:02:36.928 LINK overhead 00:02:36.928 LINK nvme_dp 00:02:36.928 LINK aer 00:02:36.928 LINK iscsi_fuzz 00:02:36.928 LINK nvme_compliance 00:02:36.928 LINK fdp 00:02:36.928 CC examples/nvme/abort/abort.o 00:02:36.928 CC examples/nvme/hello_world/hello_world.o 00:02:36.928 CC examples/nvme/arbitration/arbitration.o 00:02:36.928 CC examples/nvme/hotplug/hotplug.o 00:02:36.928 CC examples/nvme/reconnect/reconnect.o 00:02:36.928 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:36.928 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:36.928 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.219 CC examples/accel/perf/accel_perf.o 00:02:37.219 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:37.219 CC examples/blob/hello_world/hello_blob.o 00:02:37.219 CC examples/blob/cli/blobcli.o 00:02:37.219 LINK dif 00:02:37.219 LINK pmr_persistence 00:02:37.219 LINK cmb_copy 00:02:37.219 LINK hello_world 00:02:37.219 LINK hotplug 00:02:37.512 LINK reconnect 00:02:37.512 LINK arbitration 00:02:37.512 LINK abort 00:02:37.512 LINK hello_blob 00:02:37.512 LINK hello_fsdev 00:02:37.512 LINK nvme_manage 00:02:37.512 LINK accel_perf 00:02:37.512 LINK blobcli 00:02:37.774 CC test/bdev/bdevio/bdevio.o 00:02:37.774 LINK cuse 00:02:38.035 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.035 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.296 LINK bdevio 00:02:38.296 LINK hello_bdev 00:02:38.867 LINK bdevperf 00:02:39.439 CC examples/nvmf/nvmf/nvmf.o 00:02:39.699 LINK nvmf 00:02:41.083 LINK esnap 00:02:41.655 00:02:41.655 real 0m57.189s 00:02:41.655 user 8m11.243s 00:02:41.655 sys 6m6.231s 00:02:41.655 13:44:27 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:41.655 13:44:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:41.655 ************************************ 00:02:41.655 END TEST make 
00:02:41.655 ************************************ 00:02:41.655 13:44:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:41.655 13:44:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:41.655 13:44:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:41.655 13:44:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.655 13:44:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:41.655 13:44:27 -- pm/common@44 -- $ pid=2079590 00:02:41.655 13:44:27 -- pm/common@50 -- $ kill -TERM 2079590 00:02:41.655 13:44:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.655 13:44:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:41.655 13:44:27 -- pm/common@44 -- $ pid=2079591 00:02:41.655 13:44:27 -- pm/common@50 -- $ kill -TERM 2079591 00:02:41.655 13:44:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.655 13:44:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:41.655 13:44:27 -- pm/common@44 -- $ pid=2079593 00:02:41.655 13:44:27 -- pm/common@50 -- $ kill -TERM 2079593 00:02:41.655 13:44:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.655 13:44:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:41.655 13:44:27 -- pm/common@44 -- $ pid=2079616 00:02:41.655 13:44:27 -- pm/common@50 -- $ sudo -E kill -TERM 2079616 00:02:41.655 13:44:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:41.655 13:44:27 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:41.655 13:44:27 -- common/autotest_common.sh@1690 -- # [[ y 
== y ]] 00:02:41.655 13:44:27 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:41.655 13:44:27 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:41.917 13:44:27 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:41.917 13:44:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:41.917 13:44:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:41.917 13:44:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:41.917 13:44:27 -- scripts/common.sh@336 -- # IFS=.-: 00:02:41.917 13:44:27 -- scripts/common.sh@336 -- # read -ra ver1 00:02:41.917 13:44:27 -- scripts/common.sh@337 -- # IFS=.-: 00:02:41.917 13:44:27 -- scripts/common.sh@337 -- # read -ra ver2 00:02:41.917 13:44:27 -- scripts/common.sh@338 -- # local 'op=<' 00:02:41.917 13:44:27 -- scripts/common.sh@340 -- # ver1_l=2 00:02:41.917 13:44:27 -- scripts/common.sh@341 -- # ver2_l=1 00:02:41.917 13:44:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:41.917 13:44:27 -- scripts/common.sh@344 -- # case "$op" in 00:02:41.917 13:44:27 -- scripts/common.sh@345 -- # : 1 00:02:41.917 13:44:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:41.917 13:44:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.917 13:44:27 -- scripts/common.sh@365 -- # decimal 1 00:02:41.917 13:44:27 -- scripts/common.sh@353 -- # local d=1 00:02:41.917 13:44:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:41.917 13:44:27 -- scripts/common.sh@355 -- # echo 1 00:02:41.917 13:44:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:41.917 13:44:27 -- scripts/common.sh@366 -- # decimal 2 00:02:41.917 13:44:27 -- scripts/common.sh@353 -- # local d=2 00:02:41.917 13:44:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:41.917 13:44:27 -- scripts/common.sh@355 -- # echo 2 00:02:41.917 13:44:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:41.917 13:44:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:41.917 13:44:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:41.917 13:44:27 -- scripts/common.sh@368 -- # return 0 00:02:41.917 13:44:27 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:41.917 13:44:27 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:41.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.917 --rc genhtml_branch_coverage=1 00:02:41.917 --rc genhtml_function_coverage=1 00:02:41.917 --rc genhtml_legend=1 00:02:41.917 --rc geninfo_all_blocks=1 00:02:41.917 --rc geninfo_unexecuted_blocks=1 00:02:41.917 00:02:41.917 ' 00:02:41.917 13:44:27 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:41.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.917 --rc genhtml_branch_coverage=1 00:02:41.917 --rc genhtml_function_coverage=1 00:02:41.917 --rc genhtml_legend=1 00:02:41.917 --rc geninfo_all_blocks=1 00:02:41.917 --rc geninfo_unexecuted_blocks=1 00:02:41.917 00:02:41.917 ' 00:02:41.917 13:44:27 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:41.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.917 --rc genhtml_branch_coverage=1 00:02:41.917 --rc 
genhtml_function_coverage=1 00:02:41.917 --rc genhtml_legend=1 00:02:41.917 --rc geninfo_all_blocks=1 00:02:41.917 --rc geninfo_unexecuted_blocks=1 00:02:41.917 00:02:41.917 ' 00:02:41.917 13:44:27 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:41.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.917 --rc genhtml_branch_coverage=1 00:02:41.917 --rc genhtml_function_coverage=1 00:02:41.917 --rc genhtml_legend=1 00:02:41.917 --rc geninfo_all_blocks=1 00:02:41.917 --rc geninfo_unexecuted_blocks=1 00:02:41.917 00:02:41.917 ' 00:02:41.917 13:44:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.917 13:44:27 -- nvmf/common.sh@7 -- # uname -s 00:02:41.917 13:44:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.917 13:44:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.917 13:44:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.917 13:44:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.917 13:44:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.917 13:44:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.917 13:44:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.917 13:44:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.917 13:44:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.917 13:44:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.917 13:44:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:41.917 13:44:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:41.917 13:44:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.917 13:44:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.917 13:44:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.917 13:44:27 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.917 13:44:27 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:41.917 13:44:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:41.917 13:44:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.917 13:44:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.917 13:44:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.917 13:44:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.917 13:44:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.917 13:44:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.917 13:44:28 -- paths/export.sh@5 -- # export PATH 00:02:41.917 13:44:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.917 13:44:28 -- nvmf/common.sh@51 -- # : 0 00:02:41.917 13:44:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:41.917 13:44:28 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:41.917 13:44:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.917 13:44:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.917 13:44:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.917 13:44:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:41.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:41.917 13:44:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:41.917 13:44:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:41.917 13:44:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:41.917 13:44:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.917 13:44:28 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.917 13:44:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.917 13:44:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.917 13:44:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.917 13:44:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.917 13:44:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.917 13:44:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.917 13:44:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.917 13:44:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.917 13:44:28 -- spdk/autotest.sh@48 -- # udevadm_pid=2145469 00:02:41.917 13:44:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.917 13:44:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.917 13:44:28 -- pm/common@17 -- # local monitor 00:02:41.917 13:44:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.917 13:44:28 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:41.918 13:44:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.918 13:44:28 -- pm/common@21 -- # date +%s 00:02:41.918 13:44:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.918 13:44:28 -- pm/common@21 -- # date +%s 00:02:41.918 13:44:28 -- pm/common@25 -- # sleep 1 00:02:41.918 13:44:28 -- pm/common@21 -- # date +%s 00:02:41.918 13:44:28 -- pm/common@21 -- # date +%s 00:02:41.918 13:44:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897068 00:02:41.918 13:44:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897068 00:02:41.918 13:44:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897068 00:02:41.918 13:44:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897068 00:02:41.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897068_collect-cpu-load.pm.log 00:02:41.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897068_collect-vmstat.pm.log 00:02:41.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897068_collect-cpu-temp.pm.log 00:02:41.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897068_collect-bmc-pm.bmc.pm.log 00:02:42.859 
13:44:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.859 13:44:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.859 13:44:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:42.859 13:44:29 -- common/autotest_common.sh@10 -- # set +x 00:02:42.859 13:44:29 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.859 13:44:29 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:42.859 13:44:29 -- common/autotest_common.sh@10 -- # set +x 00:02:42.859 13:44:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.859 13:44:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.859 13:44:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.859 13:44:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.859 13:44:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.859 13:44:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.859 13:44:29 -- common/autotest_common.sh@1455 -- # uname 00:02:42.859 13:44:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:42.859 13:44:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.859 13:44:29 -- common/autotest_common.sh@1475 -- # uname 00:02:42.859 13:44:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:42.859 13:44:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:42.859 13:44:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:43.121 lcov: LCOV version 1.15 00:02:43.121 13:44:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:58.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:58.028 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:16.145 13:44:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:16.145 13:44:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:16.145 13:44:59 -- common/autotest_common.sh@10 -- # set +x 00:03:16.145 13:44:59 -- spdk/autotest.sh@78 -- # rm -f 00:03:16.145 13:44:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.088 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:17.088 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:17.088 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:17.348 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:17.348 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:17.608 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:17.608 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:17.608 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:17.869 13:45:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:17.869 13:45:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:17.869 13:45:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:17.869 13:45:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:17.869 13:45:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:17.869 13:45:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:17.869 13:45:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:17.869 13:45:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.869 13:45:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:17.869 13:45:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:17.869 13:45:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:17.869 13:45:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:17.869 13:45:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:17.869 13:45:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:17.869 13:45:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.869 No valid GPT data, bailing 00:03:17.869 13:45:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.869 13:45:04 -- scripts/common.sh@394 -- # pt= 00:03:17.869 13:45:04 -- scripts/common.sh@395 -- # return 1 00:03:17.869 13:45:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.869 1+0 records in 00:03:17.869 1+0 records out 00:03:17.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492858 s, 213 MB/s 00:03:17.869 13:45:04 -- spdk/autotest.sh@105 -- # sync 00:03:17.869 13:45:04 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:17.869 13:45:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:17.869 13:45:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.859 13:45:12 -- spdk/autotest.sh@111 -- # uname -s 00:03:27.859 13:45:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:27.859 13:45:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:27.859 13:45:12 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:30.406 Hugepages 00:03:30.406 node hugesize free / total 00:03:30.406 node0 1048576kB 0 / 0 00:03:30.406 node0 2048kB 0 / 0 00:03:30.406 node1 1048576kB 0 / 0 00:03:30.406 node1 2048kB 0 / 0 00:03:30.406 00:03:30.406 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.406 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:30.406 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:30.406 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:30.406 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:30.406 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:30.406 13:45:16 -- spdk/autotest.sh@117 -- # uname -s 00:03:30.406 13:45:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:30.406 13:45:16 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:30.406 13:45:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.707 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.707 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.967 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:35.880 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.140 13:45:22 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:37.083 13:45:23 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:37.083 13:45:23 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:37.083 13:45:23 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:37.083 13:45:23 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:37.083 13:45:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:37.083 13:45:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:37.083 13:45:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:37.083 13:45:23 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.083 13:45:23 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:37.083 13:45:23 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:37.083 13:45:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:37.083 13:45:23 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.290 Waiting for block devices as requested 00:03:41.290 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:41.290 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:41.551 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:41.551 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:41.551 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:41.811 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:41.811 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:41.811 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:42.071 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:42.071 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:42.333 13:45:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:42.333 13:45:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:42.333 13:45:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:42.333 13:45:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:42.333 13:45:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:42.333 13:45:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:42.333 13:45:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:42.333 13:45:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:42.333 13:45:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:42.333 13:45:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:42.333 13:45:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:42.333 13:45:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:42.333 13:45:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:42.333 13:45:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:42.333 13:45:28 -- common/autotest_common.sh@1541 -- # continue 00:03:42.333 13:45:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:42.333 13:45:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.333 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:03:42.594 13:45:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:42.594 13:45:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.594 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:03:42.594 13:45:28 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.913 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:45.913 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:45.913 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:46.172 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:46.433 13:45:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:46.433 13:45:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:46.433 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:46.693 13:45:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:46.693 13:45:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:46.693 13:45:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:46.693 13:45:32 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:46.693 13:45:32 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:46.693 13:45:32 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:46.693 13:45:32 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:46.693 13:45:32 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:46.693 13:45:32 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:46.693 13:45:32 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:46.693 13:45:32 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:46.693 13:45:32 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.693 13:45:32 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:46.693 13:45:32 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:46.693 13:45:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:46.693 13:45:32 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:46.693 13:45:32 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:46.693 13:45:32 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:46.693 13:45:32 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:46.693 13:45:32 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:46.693 13:45:32 -- common/autotest_common.sh@1570 -- # return 0 00:03:46.693 13:45:32 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:46.693 13:45:32 -- common/autotest_common.sh@1578 -- # return 0 00:03:46.693 13:45:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:46.693 13:45:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:46.693 13:45:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:46.693 13:45:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:46.693 13:45:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:46.693 13:45:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.693 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:46.693 13:45:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:46.693 13:45:32 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:46.693 13:45:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.693 13:45:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.693 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:46.693 ************************************ 
00:03:46.693 START TEST env 00:03:46.693 ************************************ 00:03:46.693 13:45:32 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:46.954 * Looking for test storage... 00:03:46.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:46.954 13:45:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.954 13:45:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.954 13:45:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.954 13:45:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.954 13:45:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.954 13:45:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.954 13:45:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.954 13:45:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.954 13:45:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.954 13:45:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.954 13:45:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.954 13:45:33 env -- scripts/common.sh@344 -- # case "$op" in 00:03:46.954 13:45:33 env -- scripts/common.sh@345 -- # : 1 00:03:46.954 13:45:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.954 13:45:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.954 13:45:33 env -- scripts/common.sh@365 -- # decimal 1 00:03:46.954 13:45:33 env -- scripts/common.sh@353 -- # local d=1 00:03:46.954 13:45:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.954 13:45:33 env -- scripts/common.sh@355 -- # echo 1 00:03:46.954 13:45:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.954 13:45:33 env -- scripts/common.sh@366 -- # decimal 2 00:03:46.954 13:45:33 env -- scripts/common.sh@353 -- # local d=2 00:03:46.954 13:45:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.954 13:45:33 env -- scripts/common.sh@355 -- # echo 2 00:03:46.954 13:45:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.954 13:45:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.954 13:45:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.954 13:45:33 env -- scripts/common.sh@368 -- # return 0 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.954 --rc genhtml_branch_coverage=1 00:03:46.954 --rc genhtml_function_coverage=1 00:03:46.954 --rc genhtml_legend=1 00:03:46.954 --rc geninfo_all_blocks=1 00:03:46.954 --rc geninfo_unexecuted_blocks=1 00:03:46.954 00:03:46.954 ' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.954 --rc genhtml_branch_coverage=1 00:03:46.954 --rc genhtml_function_coverage=1 00:03:46.954 --rc genhtml_legend=1 00:03:46.954 --rc geninfo_all_blocks=1 00:03:46.954 --rc geninfo_unexecuted_blocks=1 00:03:46.954 00:03:46.954 ' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:46.954 --rc genhtml_branch_coverage=1 00:03:46.954 --rc genhtml_function_coverage=1 00:03:46.954 --rc genhtml_legend=1 00:03:46.954 --rc geninfo_all_blocks=1 00:03:46.954 --rc geninfo_unexecuted_blocks=1 00:03:46.954 00:03:46.954 ' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.954 --rc genhtml_branch_coverage=1 00:03:46.954 --rc genhtml_function_coverage=1 00:03:46.954 --rc genhtml_legend=1 00:03:46.954 --rc geninfo_all_blocks=1 00:03:46.954 --rc geninfo_unexecuted_blocks=1 00:03:46.954 00:03:46.954 ' 00:03:46.954 13:45:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.954 13:45:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.954 13:45:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.954 ************************************ 00:03:46.954 START TEST env_memory 00:03:46.954 ************************************ 00:03:46.954 13:45:33 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:46.954 00:03:46.954 00:03:46.954 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.954 http://cunit.sourceforge.net/ 00:03:46.954 00:03:46.954 00:03:46.954 Suite: memory 00:03:46.954 Test: alloc and free memory map ...[2024-11-06 13:45:33.193065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:46.954 passed 00:03:46.954 Test: mem map translation ...[2024-11-06 13:45:33.218708] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:46.954 [2024-11-06 
13:45:33.218756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:46.954 [2024-11-06 13:45:33.218803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:46.954 [2024-11-06 13:45:33.218810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:47.215 passed 00:03:47.215 Test: mem map registration ...[2024-11-06 13:45:33.273990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:47.215 [2024-11-06 13:45:33.274020] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:47.215 passed 00:03:47.215 Test: mem map adjacent registrations ...passed 00:03:47.215 00:03:47.215 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.215 suites 1 1 n/a 0 0 00:03:47.215 tests 4 4 4 0 0 00:03:47.215 asserts 152 152 152 0 n/a 00:03:47.215 00:03:47.215 Elapsed time = 0.193 seconds 00:03:47.215 00:03:47.215 real 0m0.207s 00:03:47.215 user 0m0.197s 00:03:47.215 sys 0m0.010s 00:03:47.215 13:45:33 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.216 13:45:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:47.216 ************************************ 00:03:47.216 END TEST env_memory 00:03:47.216 ************************************ 00:03:47.216 13:45:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:47.216 13:45:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:03:47.216 13:45:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.216 13:45:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.216 ************************************ 00:03:47.216 START TEST env_vtophys 00:03:47.216 ************************************ 00:03:47.216 13:45:33 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:47.216 EAL: lib.eal log level changed from notice to debug 00:03:47.216 EAL: Detected lcore 0 as core 0 on socket 0 00:03:47.216 EAL: Detected lcore 1 as core 1 on socket 0 00:03:47.216 EAL: Detected lcore 2 as core 2 on socket 0 00:03:47.216 EAL: Detected lcore 3 as core 3 on socket 0 00:03:47.216 EAL: Detected lcore 4 as core 4 on socket 0 00:03:47.216 EAL: Detected lcore 5 as core 5 on socket 0 00:03:47.216 EAL: Detected lcore 6 as core 6 on socket 0 00:03:47.216 EAL: Detected lcore 7 as core 7 on socket 0 00:03:47.216 EAL: Detected lcore 8 as core 8 on socket 0 00:03:47.216 EAL: Detected lcore 9 as core 9 on socket 0 00:03:47.216 EAL: Detected lcore 10 as core 10 on socket 0 00:03:47.216 EAL: Detected lcore 11 as core 11 on socket 0 00:03:47.216 EAL: Detected lcore 12 as core 12 on socket 0 00:03:47.216 EAL: Detected lcore 13 as core 13 on socket 0 00:03:47.216 EAL: Detected lcore 14 as core 14 on socket 0 00:03:47.216 EAL: Detected lcore 15 as core 15 on socket 0 00:03:47.216 EAL: Detected lcore 16 as core 16 on socket 0 00:03:47.216 EAL: Detected lcore 17 as core 17 on socket 0 00:03:47.216 EAL: Detected lcore 18 as core 18 on socket 0 00:03:47.216 EAL: Detected lcore 19 as core 19 on socket 0 00:03:47.216 EAL: Detected lcore 20 as core 20 on socket 0 00:03:47.216 EAL: Detected lcore 21 as core 21 on socket 0 00:03:47.216 EAL: Detected lcore 22 as core 22 on socket 0 00:03:47.216 EAL: Detected lcore 23 as core 23 on socket 0 00:03:47.216 EAL: Detected lcore 24 as core 24 on socket 0 00:03:47.216 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:47.216 EAL: Detected lcore 26 as core 26 on socket 0 00:03:47.216 EAL: Detected lcore 27 as core 27 on socket 0 00:03:47.216 EAL: Detected lcore 28 as core 28 on socket 0 00:03:47.216 EAL: Detected lcore 29 as core 29 on socket 0 00:03:47.216 EAL: Detected lcore 30 as core 30 on socket 0 00:03:47.216 EAL: Detected lcore 31 as core 31 on socket 0 00:03:47.216 EAL: Detected lcore 32 as core 32 on socket 0 00:03:47.216 EAL: Detected lcore 33 as core 33 on socket 0 00:03:47.216 EAL: Detected lcore 34 as core 34 on socket 0 00:03:47.216 EAL: Detected lcore 35 as core 35 on socket 0 00:03:47.216 EAL: Detected lcore 36 as core 0 on socket 1 00:03:47.216 EAL: Detected lcore 37 as core 1 on socket 1 00:03:47.216 EAL: Detected lcore 38 as core 2 on socket 1 00:03:47.216 EAL: Detected lcore 39 as core 3 on socket 1 00:03:47.216 EAL: Detected lcore 40 as core 4 on socket 1 00:03:47.216 EAL: Detected lcore 41 as core 5 on socket 1 00:03:47.216 EAL: Detected lcore 42 as core 6 on socket 1 00:03:47.216 EAL: Detected lcore 43 as core 7 on socket 1 00:03:47.216 EAL: Detected lcore 44 as core 8 on socket 1 00:03:47.216 EAL: Detected lcore 45 as core 9 on socket 1 00:03:47.216 EAL: Detected lcore 46 as core 10 on socket 1 00:03:47.216 EAL: Detected lcore 47 as core 11 on socket 1 00:03:47.216 EAL: Detected lcore 48 as core 12 on socket 1 00:03:47.216 EAL: Detected lcore 49 as core 13 on socket 1 00:03:47.216 EAL: Detected lcore 50 as core 14 on socket 1 00:03:47.216 EAL: Detected lcore 51 as core 15 on socket 1 00:03:47.216 EAL: Detected lcore 52 as core 16 on socket 1 00:03:47.216 EAL: Detected lcore 53 as core 17 on socket 1 00:03:47.216 EAL: Detected lcore 54 as core 18 on socket 1 00:03:47.216 EAL: Detected lcore 55 as core 19 on socket 1 00:03:47.216 EAL: Detected lcore 56 as core 20 on socket 1 00:03:47.216 EAL: Detected lcore 57 as core 21 on socket 1 00:03:47.216 EAL: Detected lcore 58 as core 22 on socket 1 00:03:47.216 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:47.216 EAL: Detected lcore 60 as core 24 on socket 1 00:03:47.216 EAL: Detected lcore 61 as core 25 on socket 1 00:03:47.216 EAL: Detected lcore 62 as core 26 on socket 1 00:03:47.216 EAL: Detected lcore 63 as core 27 on socket 1 00:03:47.216 EAL: Detected lcore 64 as core 28 on socket 1 00:03:47.216 EAL: Detected lcore 65 as core 29 on socket 1 00:03:47.216 EAL: Detected lcore 66 as core 30 on socket 1 00:03:47.216 EAL: Detected lcore 67 as core 31 on socket 1 00:03:47.216 EAL: Detected lcore 68 as core 32 on socket 1 00:03:47.216 EAL: Detected lcore 69 as core 33 on socket 1 00:03:47.216 EAL: Detected lcore 70 as core 34 on socket 1 00:03:47.216 EAL: Detected lcore 71 as core 35 on socket 1 00:03:47.216 EAL: Detected lcore 72 as core 0 on socket 0 00:03:47.216 EAL: Detected lcore 73 as core 1 on socket 0 00:03:47.216 EAL: Detected lcore 74 as core 2 on socket 0 00:03:47.216 EAL: Detected lcore 75 as core 3 on socket 0 00:03:47.216 EAL: Detected lcore 76 as core 4 on socket 0 00:03:47.216 EAL: Detected lcore 77 as core 5 on socket 0 00:03:47.216 EAL: Detected lcore 78 as core 6 on socket 0 00:03:47.216 EAL: Detected lcore 79 as core 7 on socket 0 00:03:47.216 EAL: Detected lcore 80 as core 8 on socket 0 00:03:47.216 EAL: Detected lcore 81 as core 9 on socket 0 00:03:47.216 EAL: Detected lcore 82 as core 10 on socket 0 00:03:47.216 EAL: Detected lcore 83 as core 11 on socket 0 00:03:47.216 EAL: Detected lcore 84 as core 12 on socket 0 00:03:47.216 EAL: Detected lcore 85 as core 13 on socket 0 00:03:47.216 EAL: Detected lcore 86 as core 14 on socket 0 00:03:47.216 EAL: Detected lcore 87 as core 15 on socket 0 00:03:47.216 EAL: Detected lcore 88 as core 16 on socket 0 00:03:47.216 EAL: Detected lcore 89 as core 17 on socket 0 00:03:47.216 EAL: Detected lcore 90 as core 18 on socket 0 00:03:47.216 EAL: Detected lcore 91 as core 19 on socket 0 00:03:47.216 EAL: Detected lcore 92 as core 20 on socket 0 00:03:47.216 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:47.216 EAL: Detected lcore 94 as core 22 on socket 0 00:03:47.216 EAL: Detected lcore 95 as core 23 on socket 0 00:03:47.216 EAL: Detected lcore 96 as core 24 on socket 0 00:03:47.216 EAL: Detected lcore 97 as core 25 on socket 0 00:03:47.216 EAL: Detected lcore 98 as core 26 on socket 0 00:03:47.216 EAL: Detected lcore 99 as core 27 on socket 0 00:03:47.216 EAL: Detected lcore 100 as core 28 on socket 0 00:03:47.216 EAL: Detected lcore 101 as core 29 on socket 0 00:03:47.216 EAL: Detected lcore 102 as core 30 on socket 0 00:03:47.216 EAL: Detected lcore 103 as core 31 on socket 0 00:03:47.216 EAL: Detected lcore 104 as core 32 on socket 0 00:03:47.216 EAL: Detected lcore 105 as core 33 on socket 0 00:03:47.216 EAL: Detected lcore 106 as core 34 on socket 0 00:03:47.216 EAL: Detected lcore 107 as core 35 on socket 0 00:03:47.216 EAL: Detected lcore 108 as core 0 on socket 1 00:03:47.216 EAL: Detected lcore 109 as core 1 on socket 1 00:03:47.216 EAL: Detected lcore 110 as core 2 on socket 1 00:03:47.216 EAL: Detected lcore 111 as core 3 on socket 1 00:03:47.216 EAL: Detected lcore 112 as core 4 on socket 1 00:03:47.216 EAL: Detected lcore 113 as core 5 on socket 1 00:03:47.216 EAL: Detected lcore 114 as core 6 on socket 1 00:03:47.216 EAL: Detected lcore 115 as core 7 on socket 1 00:03:47.216 EAL: Detected lcore 116 as core 8 on socket 1 00:03:47.216 EAL: Detected lcore 117 as core 9 on socket 1 00:03:47.216 EAL: Detected lcore 118 as core 10 on socket 1 00:03:47.216 EAL: Detected lcore 119 as core 11 on socket 1 00:03:47.216 EAL: Detected lcore 120 as core 12 on socket 1 00:03:47.216 EAL: Detected lcore 121 as core 13 on socket 1 00:03:47.216 EAL: Detected lcore 122 as core 14 on socket 1 00:03:47.216 EAL: Detected lcore 123 as core 15 on socket 1 00:03:47.216 EAL: Detected lcore 124 as core 16 on socket 1 00:03:47.216 EAL: Detected lcore 125 as core 17 on socket 1 00:03:47.216 EAL: Detected lcore 126 as core 18 on socket 1 00:03:47.216 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:47.216 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:47.216 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:47.216 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:47.216 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:47.216 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:47.216 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:47.216 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:47.216 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:47.216 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:47.216 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:47.216 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:47.216 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:47.216 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:47.216 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:47.216 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:47.216 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:47.216 EAL: Maximum logical cores by configuration: 128 00:03:47.216 EAL: Detected CPU lcores: 128 00:03:47.216 EAL: Detected NUMA nodes: 2 00:03:47.216 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:47.216 EAL: Detected shared linkage of DPDK 00:03:47.216 EAL: No shared files mode enabled, IPC will be disabled 00:03:47.216 EAL: Bus pci wants IOVA as 'DC' 00:03:47.216 EAL: Buses did not request a specific IOVA mode. 00:03:47.216 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:47.216 EAL: Selected IOVA mode 'VA' 00:03:47.216 EAL: Probing VFIO support... 00:03:47.216 EAL: IOMMU type 1 (Type 1) is supported 00:03:47.216 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:47.216 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:47.216 EAL: VFIO support initialized 00:03:47.216 EAL: Ask a virtual area of 0x2e000 bytes 00:03:47.216 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:47.216 EAL: Setting up physically contiguous memory... 
00:03:47.216 EAL: Setting maximum number of open files to 524288 00:03:47.216 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:47.216 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:47.216 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:47.216 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.216 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:47.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.216 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.216 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:47.217 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:47.217 EAL: Ask a virtual area of 0x61000 bytes 00:03:47.217 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:47.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:47.217 EAL: Ask a virtual area of 0x400000000 bytes 00:03:47.217 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:47.217 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:47.217 EAL: Hugepages will be freed exactly as allocated. 
00:03:47.217 EAL: No shared files mode enabled, IPC is disabled 00:03:47.217 EAL: No shared files mode enabled, IPC is disabled 00:03:47.217 EAL: TSC frequency is ~2400000 KHz 00:03:47.217 EAL: Main lcore 0 is ready (tid=7f0afd18ca00;cpuset=[0]) 00:03:47.217 EAL: Trying to obtain current memory policy. 00:03:47.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.217 EAL: Restoring previous memory policy: 0 00:03:47.217 EAL: request: mp_malloc_sync 00:03:47.217 EAL: No shared files mode enabled, IPC is disabled 00:03:47.217 EAL: Heap on socket 0 was expanded by 2MB 00:03:47.217 EAL: No shared files mode enabled, IPC is disabled 00:03:47.477 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:47.477 EAL: Mem event callback 'spdk:(nil)' registered 00:03:47.477 00:03:47.477 00:03:47.477 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.477 http://cunit.sourceforge.net/ 00:03:47.477 00:03:47.477 00:03:47.477 Suite: components_suite 00:03:47.477 Test: vtophys_malloc_test ...passed 00:03:47.477 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:47.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.477 EAL: Restoring previous memory policy: 4 00:03:47.477 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.477 EAL: request: mp_malloc_sync 00:03:47.477 EAL: No shared files mode enabled, IPC is disabled 00:03:47.477 EAL: Heap on socket 0 was expanded by 4MB 00:03:47.477 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.477 EAL: request: mp_malloc_sync 00:03:47.477 EAL: No shared files mode enabled, IPC is disabled 00:03:47.477 EAL: Heap on socket 0 was shrunk by 4MB 00:03:47.477 EAL: Trying to obtain current memory policy. 
00:03:47.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.477 EAL: Restoring previous memory policy: 4 00:03:47.477 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.477 EAL: request: mp_malloc_sync 00:03:47.477 EAL: No shared files mode enabled, IPC is disabled 00:03:47.477 EAL: Heap on socket 0 was expanded by 6MB 00:03:47.477 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.477 EAL: request: mp_malloc_sync 00:03:47.477 EAL: No shared files mode enabled, IPC is disabled 00:03:47.477 EAL: Heap on socket 0 was shrunk by 6MB 00:03:47.477 EAL: Trying to obtain current memory policy. 00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 10MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 10MB 00:03:47.478 EAL: Trying to obtain current memory policy. 00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 18MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 18MB 00:03:47.478 EAL: Trying to obtain current memory policy. 
00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 34MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 34MB 00:03:47.478 EAL: Trying to obtain current memory policy. 00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 66MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 66MB 00:03:47.478 EAL: Trying to obtain current memory policy. 00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 130MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 130MB 00:03:47.478 EAL: Trying to obtain current memory policy. 
00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.478 EAL: Restoring previous memory policy: 4 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was expanded by 258MB 00:03:47.478 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.478 EAL: request: mp_malloc_sync 00:03:47.478 EAL: No shared files mode enabled, IPC is disabled 00:03:47.478 EAL: Heap on socket 0 was shrunk by 258MB 00:03:47.478 EAL: Trying to obtain current memory policy. 00:03:47.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.738 EAL: Restoring previous memory policy: 4 00:03:47.738 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.738 EAL: request: mp_malloc_sync 00:03:47.738 EAL: No shared files mode enabled, IPC is disabled 00:03:47.738 EAL: Heap on socket 0 was expanded by 514MB 00:03:47.738 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.738 EAL: request: mp_malloc_sync 00:03:47.738 EAL: No shared files mode enabled, IPC is disabled 00:03:47.738 EAL: Heap on socket 0 was shrunk by 514MB 00:03:47.738 EAL: Trying to obtain current memory policy. 
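Editor's note: the vtophys_spdk_malloc_test rounds in this log expand the heap by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. a 2^n + 2 MB progression. A small sketch reproducing that sequence (illustrative arithmetic only, not part of the test itself):

```shell
# Regenerate the heap-expansion sizes seen in the vtophys test rounds:
# (2^n + 2) MB for n = 1..10.
sizes=""
for n in $(seq 1 10); do
    sizes="$sizes $(( (1 << n) + 2 ))"
done
echo "${sizes# }"
```

This prints `4 6 10 18 34 66 130 258 514 1026`, matching the "expanded by N MB" lines in the log.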
00:03:47.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.998 EAL: Restoring previous memory policy: 4 00:03:47.998 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.998 EAL: request: mp_malloc_sync 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 EAL: Heap on socket 0 was expanded by 1026MB 00:03:47.998 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.998 EAL: request: mp_malloc_sync 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:47.998 passed 00:03:47.998 00:03:47.998 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.998 suites 1 1 n/a 0 0 00:03:47.998 tests 2 2 2 0 0 00:03:47.998 asserts 497 497 497 0 n/a 00:03:47.998 00:03:47.998 Elapsed time = 0.696 seconds 00:03:47.998 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.998 EAL: request: mp_malloc_sync 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 EAL: Heap on socket 0 was shrunk by 2MB 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 EAL: No shared files mode enabled, IPC is disabled 00:03:47.998 00:03:47.998 real 0m0.846s 00:03:47.998 user 0m0.444s 00:03:47.998 sys 0m0.376s 00:03:48.260 13:45:34 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.260 13:45:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:48.260 ************************************ 00:03:48.260 END TEST env_vtophys 00:03:48.260 ************************************ 00:03:48.260 13:45:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:48.260 13:45:34 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.260 13:45:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.260 13:45:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.260 
************************************ 00:03:48.260 START TEST env_pci 00:03:48.260 ************************************ 00:03:48.260 13:45:34 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:48.260 00:03:48.260 00:03:48.260 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.260 http://cunit.sourceforge.net/ 00:03:48.260 00:03:48.260 00:03:48.260 Suite: pci 00:03:48.260 Test: pci_hook ...[2024-11-06 13:45:34.375492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2165486 has claimed it 00:03:48.260 EAL: Cannot find device (10000:00:01.0) 00:03:48.260 EAL: Failed to attach device on primary process 00:03:48.260 passed 00:03:48.260 00:03:48.260 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.260 suites 1 1 n/a 0 0 00:03:48.260 tests 1 1 1 0 0 00:03:48.260 asserts 25 25 25 0 n/a 00:03:48.260 00:03:48.260 Elapsed time = 0.031 seconds 00:03:48.260 00:03:48.260 real 0m0.053s 00:03:48.260 user 0m0.015s 00:03:48.260 sys 0m0.037s 00:03:48.260 13:45:34 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.260 13:45:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:48.260 ************************************ 00:03:48.260 END TEST env_pci 00:03:48.260 ************************************ 00:03:48.260 13:45:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:48.260 13:45:34 env -- env/env.sh@15 -- # uname 00:03:48.260 13:45:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:48.260 13:45:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:48.260 13:45:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:48.260 13:45:34 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:48.260 13:45:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.260 13:45:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.260 ************************************ 00:03:48.260 START TEST env_dpdk_post_init 00:03:48.260 ************************************ 00:03:48.260 13:45:34 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:48.260 EAL: Detected CPU lcores: 128 00:03:48.260 EAL: Detected NUMA nodes: 2 00:03:48.260 EAL: Detected shared linkage of DPDK 00:03:48.260 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.521 EAL: Selected IOVA mode 'VA' 00:03:48.521 EAL: VFIO support initialized 00:03:48.521 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.521 EAL: Using IOMMU type 1 (Type 1) 00:03:48.521 EAL: Ignore mapping IO port bar(1) 00:03:48.781 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:48.781 EAL: Ignore mapping IO port bar(1) 00:03:49.042 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:49.042 EAL: Ignore mapping IO port bar(1) 00:03:49.042 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:49.303 EAL: Ignore mapping IO port bar(1) 00:03:49.303 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:49.563 EAL: Ignore mapping IO port bar(1) 00:03:49.563 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:49.824 EAL: Ignore mapping IO port bar(1) 00:03:49.824 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:50.084 EAL: Ignore mapping IO port bar(1) 00:03:50.084 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:50.084 EAL: Ignore mapping IO port bar(1) 00:03:50.343 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:50.604 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:50.604 EAL: Ignore mapping IO port bar(1) 00:03:50.864 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:50.864 EAL: Ignore mapping IO port bar(1) 00:03:50.864 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:51.124 EAL: Ignore mapping IO port bar(1) 00:03:51.124 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:51.384 EAL: Ignore mapping IO port bar(1) 00:03:51.384 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:51.644 EAL: Ignore mapping IO port bar(1) 00:03:51.644 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:51.644 EAL: Ignore mapping IO port bar(1) 00:03:51.937 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:51.937 EAL: Ignore mapping IO port bar(1) 00:03:52.338 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:52.338 EAL: Ignore mapping IO port bar(1) 00:03:52.338 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:52.338 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:52.338 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:52.338 Starting DPDK initialization... 00:03:52.338 Starting SPDK post initialization... 00:03:52.338 SPDK NVMe probe 00:03:52.338 Attaching to 0000:65:00.0 00:03:52.338 Attached to 0000:65:00.0 00:03:52.338 Cleaning up... 
00:03:54.271 00:03:54.271 real 0m5.744s 00:03:54.271 user 0m0.106s 00:03:54.271 sys 0m0.196s 00:03:54.271 13:45:40 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.271 13:45:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.271 ************************************ 00:03:54.271 END TEST env_dpdk_post_init 00:03:54.271 ************************************ 00:03:54.271 13:45:40 env -- env/env.sh@26 -- # uname 00:03:54.271 13:45:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:54.271 13:45:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.271 13:45:40 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.271 13:45:40 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.271 13:45:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.271 ************************************ 00:03:54.271 START TEST env_mem_callbacks 00:03:54.271 ************************************ 00:03:54.271 13:45:40 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.271 EAL: Detected CPU lcores: 128 00:03:54.271 EAL: Detected NUMA nodes: 2 00:03:54.271 EAL: Detected shared linkage of DPDK 00:03:54.271 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:54.271 EAL: Selected IOVA mode 'VA' 00:03:54.271 EAL: VFIO support initialized 00:03:54.271 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:54.271 00:03:54.271 00:03:54.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.271 http://cunit.sourceforge.net/ 00:03:54.271 00:03:54.271 00:03:54.271 Suite: memory 00:03:54.271 Test: test ... 
00:03:54.271 register 0x200000200000 2097152 00:03:54.271 malloc 3145728 00:03:54.271 register 0x200000400000 4194304 00:03:54.271 buf 0x200000500000 len 3145728 PASSED 00:03:54.271 malloc 64 00:03:54.271 buf 0x2000004fff40 len 64 PASSED 00:03:54.271 malloc 4194304 00:03:54.271 register 0x200000800000 6291456 00:03:54.271 buf 0x200000a00000 len 4194304 PASSED 00:03:54.271 free 0x200000500000 3145728 00:03:54.271 free 0x2000004fff40 64 00:03:54.271 unregister 0x200000400000 4194304 PASSED 00:03:54.271 free 0x200000a00000 4194304 00:03:54.271 unregister 0x200000800000 6291456 PASSED 00:03:54.271 malloc 8388608 00:03:54.271 register 0x200000400000 10485760 00:03:54.272 buf 0x200000600000 len 8388608 PASSED 00:03:54.272 free 0x200000600000 8388608 00:03:54.272 unregister 0x200000400000 10485760 PASSED 00:03:54.272 passed 00:03:54.272 00:03:54.272 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.272 suites 1 1 n/a 0 0 00:03:54.272 tests 1 1 1 0 0 00:03:54.272 asserts 15 15 15 0 n/a 00:03:54.272 00:03:54.272 Elapsed time = 0.010 seconds 00:03:54.272 00:03:54.272 real 0m0.071s 00:03:54.272 user 0m0.018s 00:03:54.272 sys 0m0.054s 00:03:54.272 13:45:40 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.272 13:45:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:54.272 ************************************ 00:03:54.272 END TEST env_mem_callbacks 00:03:54.272 ************************************ 00:03:54.272 00:03:54.272 real 0m7.549s 00:03:54.272 user 0m1.048s 00:03:54.272 sys 0m1.066s 00:03:54.272 13:45:40 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.272 13:45:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.272 ************************************ 00:03:54.272 END TEST env 00:03:54.272 ************************************ 00:03:54.272 13:45:40 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.272 13:45:40 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.272 13:45:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.272 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:03:54.272 ************************************ 00:03:54.272 START TEST rpc 00:03:54.272 ************************************ 00:03:54.272 13:45:40 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.534 * Looking for test storage... 00:03:54.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.534 13:45:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.534 13:45:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.534 13:45:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.534 13:45:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.534 13:45:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.534 13:45:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:54.534 13:45:40 rpc -- scripts/common.sh@345 -- # : 1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.534 13:45:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.534 13:45:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@353 -- # local d=1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.534 13:45:40 rpc -- scripts/common.sh@355 -- # echo 1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.534 13:45:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@353 -- # local d=2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.534 13:45:40 rpc -- scripts/common.sh@355 -- # echo 2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.534 13:45:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.534 13:45:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.534 13:45:40 rpc -- scripts/common.sh@368 -- # return 0 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.534 --rc genhtml_branch_coverage=1 00:03:54.534 --rc genhtml_function_coverage=1 00:03:54.534 --rc genhtml_legend=1 00:03:54.534 --rc geninfo_all_blocks=1 00:03:54.534 --rc geninfo_unexecuted_blocks=1 00:03:54.534 00:03:54.534 ' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.534 --rc genhtml_branch_coverage=1 00:03:54.534 --rc genhtml_function_coverage=1 00:03:54.534 --rc genhtml_legend=1 00:03:54.534 --rc geninfo_all_blocks=1 00:03:54.534 --rc geninfo_unexecuted_blocks=1 00:03:54.534 00:03:54.534 ' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:54.534 --rc genhtml_branch_coverage=1 00:03:54.534 --rc genhtml_function_coverage=1 00:03:54.534 --rc genhtml_legend=1 00:03:54.534 --rc geninfo_all_blocks=1 00:03:54.534 --rc geninfo_unexecuted_blocks=1 00:03:54.534 00:03:54.534 ' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.534 --rc genhtml_branch_coverage=1 00:03:54.534 --rc genhtml_function_coverage=1 00:03:54.534 --rc genhtml_legend=1 00:03:54.534 --rc geninfo_all_blocks=1 00:03:54.534 --rc geninfo_unexecuted_blocks=1 00:03:54.534 00:03:54.534 ' 00:03:54.534 13:45:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:54.534 13:45:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2166840 00:03:54.534 13:45:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.534 13:45:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2166840 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@833 -- # '[' -z 2166840 ']' 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.534 13:45:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.534 [2024-11-06 13:45:40.781476] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
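Editor's note: the xtrace earlier in this rpc section steps through the `lt`/`cmp_versions` helpers from scripts/common.sh, comparing lcov 1.15 against 2 component by component. A standalone sketch of the same dotted-version "less than" idea (an illustrative re-implementation, not the exact upstream code):

```shell
# Return 0 (true) if dotted version $1 is strictly less than $2,
# comparing numeric components left to right; missing components are 0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

As in the traced helpers, 1.15 compares below 2 because the first components already differ (1 < 2), so the lcov branch-coverage options get enabled.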
00:03:54.534 [2024-11-06 13:45:40.781539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166840 ] 00:03:54.794 [2024-11-06 13:45:40.875594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.794 [2024-11-06 13:45:40.928105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:54.794 [2024-11-06 13:45:40.928157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2166840' to capture a snapshot of events at runtime. 00:03:54.794 [2024-11-06 13:45:40.928168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:54.794 [2024-11-06 13:45:40.928177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:54.794 [2024-11-06 13:45:40.928183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2166840 for offline analysis/debug. 
00:03:54.794 [2024-11-06 13:45:40.928944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.365 13:45:41 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:55.365 13:45:41 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:55.365 13:45:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.365 13:45:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.365 13:45:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:55.365 13:45:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:55.365 13:45:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.365 13:45:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.365 13:45:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.365 ************************************ 00:03:55.365 START TEST rpc_integrity 00:03:55.365 ************************************ 00:03:55.365 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:55.365 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.365 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.365 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.627 { 00:03:55.627 "name": "Malloc0", 00:03:55.627 "aliases": [ 00:03:55.627 "43577773-aa06-4d17-ba54-49138538559d" 00:03:55.627 ], 00:03:55.627 "product_name": "Malloc disk", 00:03:55.627 "block_size": 512, 00:03:55.627 "num_blocks": 16384, 00:03:55.627 "uuid": "43577773-aa06-4d17-ba54-49138538559d", 00:03:55.627 "assigned_rate_limits": { 00:03:55.627 "rw_ios_per_sec": 0, 00:03:55.627 "rw_mbytes_per_sec": 0, 00:03:55.627 "r_mbytes_per_sec": 0, 00:03:55.627 "w_mbytes_per_sec": 0 00:03:55.627 }, 00:03:55.627 "claimed": false, 00:03:55.627 "zoned": false, 00:03:55.627 "supported_io_types": { 00:03:55.627 "read": true, 00:03:55.627 "write": true, 00:03:55.627 "unmap": true, 00:03:55.627 "flush": true, 00:03:55.627 "reset": true, 00:03:55.627 "nvme_admin": false, 00:03:55.627 "nvme_io": false, 00:03:55.627 "nvme_io_md": false, 00:03:55.627 "write_zeroes": true, 00:03:55.627 "zcopy": true, 00:03:55.627 "get_zone_info": false, 00:03:55.627 
"zone_management": false, 00:03:55.627 "zone_append": false, 00:03:55.627 "compare": false, 00:03:55.627 "compare_and_write": false, 00:03:55.627 "abort": true, 00:03:55.627 "seek_hole": false, 00:03:55.627 "seek_data": false, 00:03:55.627 "copy": true, 00:03:55.627 "nvme_iov_md": false 00:03:55.627 }, 00:03:55.627 "memory_domains": [ 00:03:55.627 { 00:03:55.627 "dma_device_id": "system", 00:03:55.627 "dma_device_type": 1 00:03:55.627 }, 00:03:55.627 { 00:03:55.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.627 "dma_device_type": 2 00:03:55.627 } 00:03:55.627 ], 00:03:55.627 "driver_specific": {} 00:03:55.627 } 00:03:55.627 ]' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 [2024-11-06 13:45:41.770186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:55.627 [2024-11-06 13:45:41.770232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.627 [2024-11-06 13:45:41.770250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f9800 00:03:55.627 [2024-11-06 13:45:41.770259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.627 [2024-11-06 13:45:41.771827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.627 [2024-11-06 13:45:41.771864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.627 Passthru0 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.627 { 00:03:55.627 "name": "Malloc0", 00:03:55.627 "aliases": [ 00:03:55.627 "43577773-aa06-4d17-ba54-49138538559d" 00:03:55.627 ], 00:03:55.627 "product_name": "Malloc disk", 00:03:55.627 "block_size": 512, 00:03:55.627 "num_blocks": 16384, 00:03:55.627 "uuid": "43577773-aa06-4d17-ba54-49138538559d", 00:03:55.627 "assigned_rate_limits": { 00:03:55.627 "rw_ios_per_sec": 0, 00:03:55.627 "rw_mbytes_per_sec": 0, 00:03:55.627 "r_mbytes_per_sec": 0, 00:03:55.627 "w_mbytes_per_sec": 0 00:03:55.627 }, 00:03:55.627 "claimed": true, 00:03:55.627 "claim_type": "exclusive_write", 00:03:55.627 "zoned": false, 00:03:55.627 "supported_io_types": { 00:03:55.627 "read": true, 00:03:55.627 "write": true, 00:03:55.627 "unmap": true, 00:03:55.627 "flush": true, 00:03:55.627 "reset": true, 00:03:55.627 "nvme_admin": false, 00:03:55.627 "nvme_io": false, 00:03:55.627 "nvme_io_md": false, 00:03:55.627 "write_zeroes": true, 00:03:55.627 "zcopy": true, 00:03:55.627 "get_zone_info": false, 00:03:55.627 "zone_management": false, 00:03:55.627 "zone_append": false, 00:03:55.627 "compare": false, 00:03:55.627 "compare_and_write": false, 00:03:55.627 "abort": true, 00:03:55.627 "seek_hole": false, 00:03:55.627 "seek_data": false, 00:03:55.627 "copy": true, 00:03:55.627 "nvme_iov_md": false 00:03:55.627 }, 00:03:55.627 "memory_domains": [ 00:03:55.627 { 00:03:55.627 "dma_device_id": "system", 00:03:55.627 "dma_device_type": 1 00:03:55.627 }, 00:03:55.627 { 00:03:55.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.627 "dma_device_type": 2 00:03:55.627 } 00:03:55.627 ], 00:03:55.627 "driver_specific": {} 00:03:55.627 }, 00:03:55.627 { 
00:03:55.627 "name": "Passthru0", 00:03:55.627 "aliases": [ 00:03:55.627 "06b6eca8-d683-5ca3-ba0e-8b7ebc1896b8" 00:03:55.627 ], 00:03:55.627 "product_name": "passthru", 00:03:55.627 "block_size": 512, 00:03:55.627 "num_blocks": 16384, 00:03:55.627 "uuid": "06b6eca8-d683-5ca3-ba0e-8b7ebc1896b8", 00:03:55.627 "assigned_rate_limits": { 00:03:55.627 "rw_ios_per_sec": 0, 00:03:55.627 "rw_mbytes_per_sec": 0, 00:03:55.627 "r_mbytes_per_sec": 0, 00:03:55.627 "w_mbytes_per_sec": 0 00:03:55.627 }, 00:03:55.627 "claimed": false, 00:03:55.627 "zoned": false, 00:03:55.627 "supported_io_types": { 00:03:55.627 "read": true, 00:03:55.627 "write": true, 00:03:55.627 "unmap": true, 00:03:55.627 "flush": true, 00:03:55.627 "reset": true, 00:03:55.627 "nvme_admin": false, 00:03:55.627 "nvme_io": false, 00:03:55.627 "nvme_io_md": false, 00:03:55.627 "write_zeroes": true, 00:03:55.627 "zcopy": true, 00:03:55.627 "get_zone_info": false, 00:03:55.627 "zone_management": false, 00:03:55.627 "zone_append": false, 00:03:55.627 "compare": false, 00:03:55.627 "compare_and_write": false, 00:03:55.627 "abort": true, 00:03:55.627 "seek_hole": false, 00:03:55.627 "seek_data": false, 00:03:55.627 "copy": true, 00:03:55.627 "nvme_iov_md": false 00:03:55.627 }, 00:03:55.627 "memory_domains": [ 00:03:55.627 { 00:03:55.627 "dma_device_id": "system", 00:03:55.627 "dma_device_type": 1 00:03:55.627 }, 00:03:55.627 { 00:03:55.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.627 "dma_device_type": 2 00:03:55.627 } 00:03:55.627 ], 00:03:55.627 "driver_specific": { 00:03:55.627 "passthru": { 00:03:55.627 "name": "Passthru0", 00:03:55.627 "base_bdev_name": "Malloc0" 00:03:55.627 } 00:03:55.627 } 00:03:55.627 } 00:03:55.627 ]' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.627 13:45:41 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.627 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:55.627 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:55.889 13:45:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:55.889 00:03:55.889 real 0m0.281s 00:03:55.889 user 0m0.171s 00:03:55.889 sys 0m0.043s 00:03:55.889 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.889 13:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 ************************************ 00:03:55.889 END TEST rpc_integrity 00:03:55.889 ************************************ 00:03:55.889 13:45:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:55.889 13:45:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.889 13:45:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.889 13:45:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 ************************************ 00:03:55.889 START TEST rpc_plugins 
00:03:55.889 ************************************ 00:03:55.889 13:45:41 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:55.889 13:45:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:55.889 13:45:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.889 13:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:55.889 { 00:03:55.889 "name": "Malloc1", 00:03:55.889 "aliases": [ 00:03:55.889 "b1e704c8-4a40-4326-8da8-28505a24efb7" 00:03:55.889 ], 00:03:55.889 "product_name": "Malloc disk", 00:03:55.889 "block_size": 4096, 00:03:55.889 "num_blocks": 256, 00:03:55.889 "uuid": "b1e704c8-4a40-4326-8da8-28505a24efb7", 00:03:55.889 "assigned_rate_limits": { 00:03:55.889 "rw_ios_per_sec": 0, 00:03:55.889 "rw_mbytes_per_sec": 0, 00:03:55.889 "r_mbytes_per_sec": 0, 00:03:55.889 "w_mbytes_per_sec": 0 00:03:55.889 }, 00:03:55.889 "claimed": false, 00:03:55.889 "zoned": false, 00:03:55.889 "supported_io_types": { 00:03:55.889 "read": true, 00:03:55.889 "write": true, 00:03:55.889 "unmap": true, 00:03:55.889 "flush": true, 00:03:55.889 "reset": true, 00:03:55.889 "nvme_admin": false, 00:03:55.889 "nvme_io": false, 00:03:55.889 "nvme_io_md": false, 00:03:55.889 "write_zeroes": true, 00:03:55.889 "zcopy": true, 00:03:55.889 "get_zone_info": false, 00:03:55.889 "zone_management": false, 00:03:55.889 
"zone_append": false, 00:03:55.889 "compare": false, 00:03:55.889 "compare_and_write": false, 00:03:55.889 "abort": true, 00:03:55.889 "seek_hole": false, 00:03:55.889 "seek_data": false, 00:03:55.889 "copy": true, 00:03:55.889 "nvme_iov_md": false 00:03:55.889 }, 00:03:55.889 "memory_domains": [ 00:03:55.889 { 00:03:55.889 "dma_device_id": "system", 00:03:55.889 "dma_device_type": 1 00:03:55.889 }, 00:03:55.889 { 00:03:55.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.889 "dma_device_type": 2 00:03:55.889 } 00:03:55.889 ], 00:03:55.889 "driver_specific": {} 00:03:55.889 } 00:03:55.889 ]' 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:55.889 13:45:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:55.889 00:03:55.889 real 0m0.153s 00:03:55.889 user 0m0.094s 00:03:55.889 sys 0m0.020s 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.889 13:45:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.889 ************************************ 
00:03:55.889 END TEST rpc_plugins 00:03:55.889 ************************************ 00:03:56.150 13:45:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:56.150 13:45:42 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.150 13:45:42 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.150 13:45:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.150 ************************************ 00:03:56.150 START TEST rpc_trace_cmd_test 00:03:56.150 ************************************ 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:56.150 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2166840", 00:03:56.150 "tpoint_group_mask": "0x8", 00:03:56.150 "iscsi_conn": { 00:03:56.150 "mask": "0x2", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "scsi": { 00:03:56.150 "mask": "0x4", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "bdev": { 00:03:56.150 "mask": "0x8", 00:03:56.150 "tpoint_mask": "0xffffffffffffffff" 00:03:56.150 }, 00:03:56.150 "nvmf_rdma": { 00:03:56.150 "mask": "0x10", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "nvmf_tcp": { 00:03:56.150 "mask": "0x20", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "ftl": { 00:03:56.150 "mask": "0x40", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "blobfs": { 00:03:56.150 "mask": "0x80", 00:03:56.150 
"tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "dsa": { 00:03:56.150 "mask": "0x200", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "thread": { 00:03:56.150 "mask": "0x400", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "nvme_pcie": { 00:03:56.150 "mask": "0x800", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "iaa": { 00:03:56.150 "mask": "0x1000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "nvme_tcp": { 00:03:56.150 "mask": "0x2000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "bdev_nvme": { 00:03:56.150 "mask": "0x4000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "sock": { 00:03:56.150 "mask": "0x8000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "blob": { 00:03:56.150 "mask": "0x10000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "bdev_raid": { 00:03:56.150 "mask": "0x20000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 }, 00:03:56.150 "scheduler": { 00:03:56.150 "mask": "0x40000", 00:03:56.150 "tpoint_mask": "0x0" 00:03:56.150 } 00:03:56.150 }' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:56.150 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:56.411 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:56.411 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:56.411 13:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:56.411 00:03:56.411 real 0m0.252s 00:03:56.411 user 0m0.206s 00:03:56.411 sys 0m0.037s 00:03:56.411 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.412 13:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.412 ************************************ 00:03:56.412 END TEST rpc_trace_cmd_test 00:03:56.412 ************************************ 00:03:56.412 13:45:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:56.412 13:45:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:56.412 13:45:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:56.412 13:45:42 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.412 13:45:42 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.412 13:45:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.412 ************************************ 00:03:56.412 START TEST rpc_daemon_integrity 00:03:56.412 ************************************ 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.412 { 00:03:56.412 "name": "Malloc2", 00:03:56.412 "aliases": [ 00:03:56.412 "ef20d307-0d7d-4d3c-bab5-99dccbbc0890" 00:03:56.412 ], 00:03:56.412 "product_name": "Malloc disk", 00:03:56.412 "block_size": 512, 00:03:56.412 "num_blocks": 16384, 00:03:56.412 "uuid": "ef20d307-0d7d-4d3c-bab5-99dccbbc0890", 00:03:56.412 "assigned_rate_limits": { 00:03:56.412 "rw_ios_per_sec": 0, 00:03:56.412 "rw_mbytes_per_sec": 0, 00:03:56.412 "r_mbytes_per_sec": 0, 00:03:56.412 "w_mbytes_per_sec": 0 00:03:56.412 }, 00:03:56.412 "claimed": false, 00:03:56.412 "zoned": false, 00:03:56.412 "supported_io_types": { 00:03:56.412 "read": true, 00:03:56.412 "write": true, 00:03:56.412 "unmap": true, 00:03:56.412 "flush": true, 00:03:56.412 "reset": true, 00:03:56.412 "nvme_admin": false, 00:03:56.412 "nvme_io": false, 00:03:56.412 "nvme_io_md": false, 00:03:56.412 "write_zeroes": true, 00:03:56.412 "zcopy": true, 00:03:56.412 "get_zone_info": false, 00:03:56.412 "zone_management": false, 00:03:56.412 "zone_append": false, 00:03:56.412 "compare": false, 00:03:56.412 "compare_and_write": false, 00:03:56.412 "abort": true, 00:03:56.412 "seek_hole": false, 00:03:56.412 "seek_data": false, 00:03:56.412 "copy": true, 00:03:56.412 "nvme_iov_md": false 00:03:56.412 }, 00:03:56.412 "memory_domains": [ 00:03:56.412 { 
00:03:56.412 "dma_device_id": "system", 00:03:56.412 "dma_device_type": 1 00:03:56.412 }, 00:03:56.412 { 00:03:56.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.412 "dma_device_type": 2 00:03:56.412 } 00:03:56.412 ], 00:03:56.412 "driver_specific": {} 00:03:56.412 } 00:03:56.412 ]' 00:03:56.412 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.672 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.672 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:56.672 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.672 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.672 [2024-11-06 13:45:42.696699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:56.672 [2024-11-06 13:45:42.696740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.672 [2024-11-06 13:45:42.696761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2446550 00:03:56.672 [2024-11-06 13:45:42.696768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.672 [2024-11-06 13:45:42.698303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.673 [2024-11-06 13:45:42.698338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.673 Passthru0 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.673 { 00:03:56.673 "name": "Malloc2", 00:03:56.673 "aliases": [ 00:03:56.673 "ef20d307-0d7d-4d3c-bab5-99dccbbc0890" 00:03:56.673 ], 00:03:56.673 "product_name": "Malloc disk", 00:03:56.673 "block_size": 512, 00:03:56.673 "num_blocks": 16384, 00:03:56.673 "uuid": "ef20d307-0d7d-4d3c-bab5-99dccbbc0890", 00:03:56.673 "assigned_rate_limits": { 00:03:56.673 "rw_ios_per_sec": 0, 00:03:56.673 "rw_mbytes_per_sec": 0, 00:03:56.673 "r_mbytes_per_sec": 0, 00:03:56.673 "w_mbytes_per_sec": 0 00:03:56.673 }, 00:03:56.673 "claimed": true, 00:03:56.673 "claim_type": "exclusive_write", 00:03:56.673 "zoned": false, 00:03:56.673 "supported_io_types": { 00:03:56.673 "read": true, 00:03:56.673 "write": true, 00:03:56.673 "unmap": true, 00:03:56.673 "flush": true, 00:03:56.673 "reset": true, 00:03:56.673 "nvme_admin": false, 00:03:56.673 "nvme_io": false, 00:03:56.673 "nvme_io_md": false, 00:03:56.673 "write_zeroes": true, 00:03:56.673 "zcopy": true, 00:03:56.673 "get_zone_info": false, 00:03:56.673 "zone_management": false, 00:03:56.673 "zone_append": false, 00:03:56.673 "compare": false, 00:03:56.673 "compare_and_write": false, 00:03:56.673 "abort": true, 00:03:56.673 "seek_hole": false, 00:03:56.673 "seek_data": false, 00:03:56.673 "copy": true, 00:03:56.673 "nvme_iov_md": false 00:03:56.673 }, 00:03:56.673 "memory_domains": [ 00:03:56.673 { 00:03:56.673 "dma_device_id": "system", 00:03:56.673 "dma_device_type": 1 00:03:56.673 }, 00:03:56.673 { 00:03:56.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.673 "dma_device_type": 2 00:03:56.673 } 00:03:56.673 ], 00:03:56.673 "driver_specific": {} 00:03:56.673 }, 00:03:56.673 { 00:03:56.673 "name": "Passthru0", 00:03:56.673 "aliases": [ 00:03:56.673 "536589ef-1467-5f9f-af65-686831e9c322" 00:03:56.673 ], 00:03:56.673 "product_name": "passthru", 00:03:56.673 "block_size": 512, 00:03:56.673 "num_blocks": 16384, 00:03:56.673 "uuid": 
"536589ef-1467-5f9f-af65-686831e9c322", 00:03:56.673 "assigned_rate_limits": { 00:03:56.673 "rw_ios_per_sec": 0, 00:03:56.673 "rw_mbytes_per_sec": 0, 00:03:56.673 "r_mbytes_per_sec": 0, 00:03:56.673 "w_mbytes_per_sec": 0 00:03:56.673 }, 00:03:56.673 "claimed": false, 00:03:56.673 "zoned": false, 00:03:56.673 "supported_io_types": { 00:03:56.673 "read": true, 00:03:56.673 "write": true, 00:03:56.673 "unmap": true, 00:03:56.673 "flush": true, 00:03:56.673 "reset": true, 00:03:56.673 "nvme_admin": false, 00:03:56.673 "nvme_io": false, 00:03:56.673 "nvme_io_md": false, 00:03:56.673 "write_zeroes": true, 00:03:56.673 "zcopy": true, 00:03:56.673 "get_zone_info": false, 00:03:56.673 "zone_management": false, 00:03:56.673 "zone_append": false, 00:03:56.673 "compare": false, 00:03:56.673 "compare_and_write": false, 00:03:56.673 "abort": true, 00:03:56.673 "seek_hole": false, 00:03:56.673 "seek_data": false, 00:03:56.673 "copy": true, 00:03:56.673 "nvme_iov_md": false 00:03:56.673 }, 00:03:56.673 "memory_domains": [ 00:03:56.673 { 00:03:56.673 "dma_device_id": "system", 00:03:56.673 "dma_device_type": 1 00:03:56.673 }, 00:03:56.673 { 00:03:56.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.673 "dma_device_type": 2 00:03:56.673 } 00:03:56.673 ], 00:03:56.673 "driver_specific": { 00:03:56.673 "passthru": { 00:03:56.673 "name": "Passthru0", 00:03:56.673 "base_bdev_name": "Malloc2" 00:03:56.673 } 00:03:56.673 } 00:03:56.673 } 00:03:56.673 ]' 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.673 00:03:56.673 real 0m0.303s 00:03:56.673 user 0m0.195s 00:03:56.673 sys 0m0.041s 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.673 13:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.673 ************************************ 00:03:56.673 END TEST rpc_daemon_integrity 00:03:56.673 ************************************ 00:03:56.673 13:45:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:56.673 13:45:42 rpc -- rpc/rpc.sh@84 -- # killprocess 2166840 00:03:56.673 13:45:42 rpc -- common/autotest_common.sh@952 -- # '[' -z 2166840 ']' 00:03:56.673 13:45:42 rpc -- common/autotest_common.sh@956 -- # kill -0 2166840 00:03:56.673 13:45:42 rpc -- common/autotest_common.sh@957 -- # uname 00:03:56.673 13:45:42 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:56.673 13:45:42 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2166840 00:03:56.933 13:45:42 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:56.933 13:45:42 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:56.933 13:45:42 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2166840' 00:03:56.933 killing process with pid 2166840 00:03:56.933 13:45:42 rpc -- common/autotest_common.sh@971 -- # kill 2166840 00:03:56.933 13:45:42 rpc -- common/autotest_common.sh@976 -- # wait 2166840 00:03:56.933 00:03:56.933 real 0m2.680s 00:03:56.933 user 0m3.401s 00:03:56.933 sys 0m0.823s 00:03:56.933 13:45:43 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.933 13:45:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.934 ************************************ 00:03:56.934 END TEST rpc 00:03:56.934 ************************************ 00:03:57.194 13:45:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.194 13:45:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.194 13:45:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.194 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:03:57.194 ************************************ 00:03:57.194 START TEST skip_rpc 00:03:57.194 ************************************ 00:03:57.194 13:45:43 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.194 * Looking for test storage... 
00:03:57.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.194 13:45:43 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:57.194 13:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:57.194 13:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:57.194 13:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.194 13:45:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.454 13:45:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.455 13:45:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.455 13:45:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.455 13:45:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.455 --rc genhtml_branch_coverage=1 00:03:57.455 --rc genhtml_function_coverage=1 00:03:57.455 --rc genhtml_legend=1 00:03:57.455 --rc geninfo_all_blocks=1 00:03:57.455 --rc geninfo_unexecuted_blocks=1 00:03:57.455 00:03:57.455 ' 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.455 --rc genhtml_branch_coverage=1 00:03:57.455 --rc genhtml_function_coverage=1 00:03:57.455 --rc genhtml_legend=1 00:03:57.455 --rc geninfo_all_blocks=1 00:03:57.455 --rc geninfo_unexecuted_blocks=1 00:03:57.455 00:03:57.455 ' 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.455 --rc genhtml_branch_coverage=1 00:03:57.455 --rc genhtml_function_coverage=1 00:03:57.455 --rc genhtml_legend=1 00:03:57.455 --rc geninfo_all_blocks=1 00:03:57.455 --rc geninfo_unexecuted_blocks=1 00:03:57.455 00:03:57.455 ' 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.455 --rc genhtml_branch_coverage=1 00:03:57.455 --rc genhtml_function_coverage=1 00:03:57.455 --rc genhtml_legend=1 00:03:57.455 --rc geninfo_all_blocks=1 00:03:57.455 --rc geninfo_unexecuted_blocks=1 00:03:57.455 00:03:57.455 ' 00:03:57.455 13:45:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.455 13:45:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:57.455 13:45:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.455 13:45:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.455 ************************************ 00:03:57.455 START TEST skip_rpc 00:03:57.455 ************************************ 00:03:57.455 13:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:57.455 13:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2167695 00:03:57.455 13:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.455 13:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:57.455 13:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:57.455 [2024-11-06 13:45:43.592842] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:03:57.455 [2024-11-06 13:45:43.592898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167695 ] 00:03:57.455 [2024-11-06 13:45:43.685668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.715 [2024-11-06 13:45:43.738011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:03.001 13:45:48 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2167695 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2167695 ']' 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2167695 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2167695 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2167695' 00:04:03.001 killing process with pid 2167695 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2167695 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2167695 00:04:03.001 00:04:03.001 real 0m5.266s 00:04:03.001 user 0m4.996s 00:04:03.001 sys 0m0.307s 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.001 13:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.001 ************************************ 00:04:03.001 END TEST skip_rpc 00:04:03.001 ************************************ 00:04:03.001 13:45:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:03.001 13:45:48 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.001 13:45:48 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.001 13:45:48 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.001 ************************************ 00:04:03.001 START TEST skip_rpc_with_json 00:04:03.001 ************************************ 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2168730 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2168730 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2168730 ']' 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.001 13:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.001 [2024-11-06 13:45:48.936487] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:03.001 [2024-11-06 13:45:48.936536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168730 ] 00:04:03.001 [2024-11-06 13:45:49.021058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.001 [2024-11-06 13:45:49.051573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 [2024-11-06 13:45:49.740274] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:03.570 request: 00:04:03.570 { 00:04:03.570 "trtype": "tcp", 00:04:03.570 "method": "nvmf_get_transports", 00:04:03.570 "req_id": 1 00:04:03.570 } 00:04:03.570 Got JSON-RPC error response 00:04:03.570 response: 00:04:03.570 { 00:04:03.570 "code": -19, 00:04:03.570 "message": "No such device" 00:04:03.570 } 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 [2024-11-06 13:45:49.752367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.570 13:45:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.570 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.831 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.831 { 00:04:03.831 "subsystems": [ 00:04:03.831 { 00:04:03.831 "subsystem": "fsdev", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "fsdev_set_opts", 00:04:03.831 "params": { 00:04:03.831 "fsdev_io_pool_size": 65535, 00:04:03.831 "fsdev_io_cache_size": 256 00:04:03.831 } 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "vfio_user_target", 00:04:03.831 "config": null 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "keyring", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "iobuf", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "iobuf_set_options", 00:04:03.831 "params": { 00:04:03.831 "small_pool_count": 8192, 00:04:03.831 "large_pool_count": 1024, 00:04:03.831 "small_bufsize": 8192, 00:04:03.831 "large_bufsize": 135168, 00:04:03.831 "enable_numa": false 00:04:03.831 } 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "sock", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "sock_set_default_impl", 00:04:03.831 "params": { 00:04:03.831 "impl_name": "posix" 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "sock_impl_set_options", 00:04:03.831 "params": { 00:04:03.831 "impl_name": "ssl", 00:04:03.831 "recv_buf_size": 4096, 00:04:03.831 "send_buf_size": 4096, 
00:04:03.831 "enable_recv_pipe": true, 00:04:03.831 "enable_quickack": false, 00:04:03.831 "enable_placement_id": 0, 00:04:03.831 "enable_zerocopy_send_server": true, 00:04:03.831 "enable_zerocopy_send_client": false, 00:04:03.831 "zerocopy_threshold": 0, 00:04:03.831 "tls_version": 0, 00:04:03.831 "enable_ktls": false 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "sock_impl_set_options", 00:04:03.831 "params": { 00:04:03.831 "impl_name": "posix", 00:04:03.831 "recv_buf_size": 2097152, 00:04:03.831 "send_buf_size": 2097152, 00:04:03.831 "enable_recv_pipe": true, 00:04:03.831 "enable_quickack": false, 00:04:03.831 "enable_placement_id": 0, 00:04:03.831 "enable_zerocopy_send_server": true, 00:04:03.831 "enable_zerocopy_send_client": false, 00:04:03.831 "zerocopy_threshold": 0, 00:04:03.831 "tls_version": 0, 00:04:03.831 "enable_ktls": false 00:04:03.831 } 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "vmd", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "accel", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "accel_set_options", 00:04:03.831 "params": { 00:04:03.831 "small_cache_size": 128, 00:04:03.831 "large_cache_size": 16, 00:04:03.831 "task_count": 2048, 00:04:03.831 "sequence_count": 2048, 00:04:03.831 "buf_count": 2048 00:04:03.831 } 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "bdev", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "bdev_set_options", 00:04:03.831 "params": { 00:04:03.831 "bdev_io_pool_size": 65535, 00:04:03.831 "bdev_io_cache_size": 256, 00:04:03.831 "bdev_auto_examine": true, 00:04:03.831 "iobuf_small_cache_size": 128, 00:04:03.831 "iobuf_large_cache_size": 16 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "bdev_raid_set_options", 00:04:03.831 "params": { 00:04:03.831 "process_window_size_kb": 1024, 00:04:03.831 "process_max_bandwidth_mb_sec": 0 
00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "bdev_iscsi_set_options", 00:04:03.831 "params": { 00:04:03.831 "timeout_sec": 30 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "bdev_nvme_set_options", 00:04:03.831 "params": { 00:04:03.831 "action_on_timeout": "none", 00:04:03.831 "timeout_us": 0, 00:04:03.831 "timeout_admin_us": 0, 00:04:03.831 "keep_alive_timeout_ms": 10000, 00:04:03.831 "arbitration_burst": 0, 00:04:03.831 "low_priority_weight": 0, 00:04:03.831 "medium_priority_weight": 0, 00:04:03.831 "high_priority_weight": 0, 00:04:03.831 "nvme_adminq_poll_period_us": 10000, 00:04:03.831 "nvme_ioq_poll_period_us": 0, 00:04:03.831 "io_queue_requests": 0, 00:04:03.831 "delay_cmd_submit": true, 00:04:03.831 "transport_retry_count": 4, 00:04:03.831 "bdev_retry_count": 3, 00:04:03.831 "transport_ack_timeout": 0, 00:04:03.831 "ctrlr_loss_timeout_sec": 0, 00:04:03.831 "reconnect_delay_sec": 0, 00:04:03.831 "fast_io_fail_timeout_sec": 0, 00:04:03.831 "disable_auto_failback": false, 00:04:03.831 "generate_uuids": false, 00:04:03.831 "transport_tos": 0, 00:04:03.831 "nvme_error_stat": false, 00:04:03.831 "rdma_srq_size": 0, 00:04:03.831 "io_path_stat": false, 00:04:03.831 "allow_accel_sequence": false, 00:04:03.831 "rdma_max_cq_size": 0, 00:04:03.831 "rdma_cm_event_timeout_ms": 0, 00:04:03.831 "dhchap_digests": [ 00:04:03.831 "sha256", 00:04:03.831 "sha384", 00:04:03.831 "sha512" 00:04:03.831 ], 00:04:03.831 "dhchap_dhgroups": [ 00:04:03.831 "null", 00:04:03.831 "ffdhe2048", 00:04:03.831 "ffdhe3072", 00:04:03.831 "ffdhe4096", 00:04:03.831 "ffdhe6144", 00:04:03.831 "ffdhe8192" 00:04:03.831 ] 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "bdev_nvme_set_hotplug", 00:04:03.831 "params": { 00:04:03.831 "period_us": 100000, 00:04:03.831 "enable": false 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "bdev_wait_for_examine" 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 
00:04:03.831 "subsystem": "scsi", 00:04:03.831 "config": null 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "scheduler", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "framework_set_scheduler", 00:04:03.831 "params": { 00:04:03.831 "name": "static" 00:04:03.831 } 00:04:03.831 } 00:04:03.831 ] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "vhost_scsi", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "vhost_blk", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "ublk", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "nbd", 00:04:03.831 "config": [] 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "subsystem": "nvmf", 00:04:03.831 "config": [ 00:04:03.831 { 00:04:03.831 "method": "nvmf_set_config", 00:04:03.831 "params": { 00:04:03.831 "discovery_filter": "match_any", 00:04:03.831 "admin_cmd_passthru": { 00:04:03.831 "identify_ctrlr": false 00:04:03.831 }, 00:04:03.831 "dhchap_digests": [ 00:04:03.831 "sha256", 00:04:03.831 "sha384", 00:04:03.831 "sha512" 00:04:03.831 ], 00:04:03.831 "dhchap_dhgroups": [ 00:04:03.831 "null", 00:04:03.831 "ffdhe2048", 00:04:03.831 "ffdhe3072", 00:04:03.831 "ffdhe4096", 00:04:03.831 "ffdhe6144", 00:04:03.831 "ffdhe8192" 00:04:03.831 ] 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "nvmf_set_max_subsystems", 00:04:03.831 "params": { 00:04:03.831 "max_subsystems": 1024 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "nvmf_set_crdt", 00:04:03.831 "params": { 00:04:03.831 "crdt1": 0, 00:04:03.831 "crdt2": 0, 00:04:03.831 "crdt3": 0 00:04:03.831 } 00:04:03.831 }, 00:04:03.831 { 00:04:03.831 "method": "nvmf_create_transport", 00:04:03.831 "params": { 00:04:03.831 "trtype": "TCP", 00:04:03.831 "max_queue_depth": 128, 00:04:03.832 "max_io_qpairs_per_ctrlr": 127, 00:04:03.832 "in_capsule_data_size": 4096, 00:04:03.832 "max_io_size": 131072, 00:04:03.832 
"io_unit_size": 131072, 00:04:03.832 "max_aq_depth": 128, 00:04:03.832 "num_shared_buffers": 511, 00:04:03.832 "buf_cache_size": 4294967295, 00:04:03.832 "dif_insert_or_strip": false, 00:04:03.832 "zcopy": false, 00:04:03.832 "c2h_success": true, 00:04:03.832 "sock_priority": 0, 00:04:03.832 "abort_timeout_sec": 1, 00:04:03.832 "ack_timeout": 0, 00:04:03.832 "data_wr_pool_size": 0 00:04:03.832 } 00:04:03.832 } 00:04:03.832 ] 00:04:03.832 }, 00:04:03.832 { 00:04:03.832 "subsystem": "iscsi", 00:04:03.832 "config": [ 00:04:03.832 { 00:04:03.832 "method": "iscsi_set_options", 00:04:03.832 "params": { 00:04:03.832 "node_base": "iqn.2016-06.io.spdk", 00:04:03.832 "max_sessions": 128, 00:04:03.832 "max_connections_per_session": 2, 00:04:03.832 "max_queue_depth": 64, 00:04:03.832 "default_time2wait": 2, 00:04:03.832 "default_time2retain": 20, 00:04:03.832 "first_burst_length": 8192, 00:04:03.832 "immediate_data": true, 00:04:03.832 "allow_duplicated_isid": false, 00:04:03.832 "error_recovery_level": 0, 00:04:03.832 "nop_timeout": 60, 00:04:03.832 "nop_in_interval": 30, 00:04:03.832 "disable_chap": false, 00:04:03.832 "require_chap": false, 00:04:03.832 "mutual_chap": false, 00:04:03.832 "chap_group": 0, 00:04:03.832 "max_large_datain_per_connection": 64, 00:04:03.832 "max_r2t_per_connection": 4, 00:04:03.832 "pdu_pool_size": 36864, 00:04:03.832 "immediate_data_pool_size": 16384, 00:04:03.832 "data_out_pool_size": 2048 00:04:03.832 } 00:04:03.832 } 00:04:03.832 ] 00:04:03.832 } 00:04:03.832 ] 00:04:03.832 } 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2168730 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2168730 ']' 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2168730 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2168730 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2168730' 00:04:03.832 killing process with pid 2168730 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2168730 00:04:03.832 13:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2168730 00:04:04.092 13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2169028 00:04:04.092 13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:04.092 13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2169028 ']' 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2169028' 00:04:09.375 killing process with pid 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2169028 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.375 00:04:09.375 real 0m6.569s 00:04:09.375 user 0m6.489s 00:04:09.375 sys 0m0.550s 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.375 ************************************ 00:04:09.375 END TEST skip_rpc_with_json 00:04:09.375 ************************************ 00:04:09.375 13:45:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.375 ************************************ 00:04:09.375 START TEST skip_rpc_with_delay 00:04:09.375 ************************************ 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.375 [2024-11-06 13:45:55.583844] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:09.375 00:04:09.375 real 0m0.076s 00:04:09.375 user 0m0.052s 00:04:09.375 sys 0m0.023s 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.375 13:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:09.375 ************************************ 00:04:09.375 END TEST skip_rpc_with_delay 00:04:09.375 ************************************ 00:04:09.375 13:45:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:09.375 13:45:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:09.375 13:45:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.375 13:45:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.635 ************************************ 00:04:09.635 START TEST exit_on_failed_rpc_init 00:04:09.635 ************************************ 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2170142 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2170142 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2170142 ']' 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:09.635 13:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.635 [2024-11-06 13:45:55.734761] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:09.635 [2024-11-06 13:45:55.734808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170142 ] 00:04:09.635 [2024-11-06 13:45:55.818167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.635 [2024-11-06 13:45:55.848134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.574 
13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.574 [2024-11-06 13:45:56.593238] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:10.574 [2024-11-06 13:45:56.593294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170217 ] 00:04:10.574 [2024-11-06 13:45:56.683261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.574 [2024-11-06 13:45:56.719042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.574 [2024-11-06 13:45:56.719095] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:10.574 [2024-11-06 13:45:56.719104] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.574 [2024-11-06 13:45:56.719112] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2170142 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2170142 ']' 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2170142 00:04:10.574 13:45:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2170142 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.574 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.575 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2170142' 00:04:10.575 killing process with pid 2170142 00:04:10.575 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2170142 00:04:10.575 13:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2170142 00:04:10.834 00:04:10.834 real 0m1.332s 00:04:10.834 user 0m1.573s 00:04:10.834 sys 0m0.373s 00:04:10.834 13:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.834 13:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.834 ************************************ 00:04:10.834 END TEST exit_on_failed_rpc_init 00:04:10.834 ************************************ 00:04:10.834 13:45:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.834 00:04:10.834 real 0m13.764s 00:04:10.834 user 0m13.342s 00:04:10.834 sys 0m1.572s 00:04:10.834 13:45:57 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.834 13:45:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.834 ************************************ 00:04:10.834 END TEST skip_rpc 00:04:10.834 ************************************ 00:04:10.834 13:45:57 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.834 13:45:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.834 13:45:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.834 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:04:11.094 ************************************ 00:04:11.094 START TEST rpc_client 00:04:11.094 ************************************ 00:04:11.094 13:45:57 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.094 * Looking for test storage... 00:04:11.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:11.094 13:45:57 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.094 13:45:57 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.094 13:45:57 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.094 13:45:57 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.094 13:45:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:11.095 13:45:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.095 13:45:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.095 13:45:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.095 13:45:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.095 --rc genhtml_branch_coverage=1 00:04:11.095 --rc genhtml_function_coverage=1 00:04:11.095 --rc genhtml_legend=1 00:04:11.095 --rc geninfo_all_blocks=1 00:04:11.095 --rc geninfo_unexecuted_blocks=1 00:04:11.095 00:04:11.095 ' 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.095 --rc genhtml_branch_coverage=1 
00:04:11.095 --rc genhtml_function_coverage=1 00:04:11.095 --rc genhtml_legend=1 00:04:11.095 --rc geninfo_all_blocks=1 00:04:11.095 --rc geninfo_unexecuted_blocks=1 00:04:11.095 00:04:11.095 ' 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.095 --rc genhtml_branch_coverage=1 00:04:11.095 --rc genhtml_function_coverage=1 00:04:11.095 --rc genhtml_legend=1 00:04:11.095 --rc geninfo_all_blocks=1 00:04:11.095 --rc geninfo_unexecuted_blocks=1 00:04:11.095 00:04:11.095 ' 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.095 --rc genhtml_branch_coverage=1 00:04:11.095 --rc genhtml_function_coverage=1 00:04:11.095 --rc genhtml_legend=1 00:04:11.095 --rc geninfo_all_blocks=1 00:04:11.095 --rc geninfo_unexecuted_blocks=1 00:04:11.095 00:04:11.095 ' 00:04:11.095 13:45:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.095 OK 00:04:11.095 13:45:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.095 00:04:11.095 real 0m0.219s 00:04:11.095 user 0m0.135s 00:04:11.095 sys 0m0.099s 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.095 13:45:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:11.095 ************************************ 00:04:11.095 END TEST rpc_client 00:04:11.095 ************************************ 00:04:11.356 13:45:57 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.356 13:45:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.356 13:45:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.356 13:45:57 -- common/autotest_common.sh@10 
-- # set +x 00:04:11.356 ************************************ 00:04:11.356 START TEST json_config 00:04:11.356 ************************************ 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.356 13:45:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.356 13:45:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.356 13:45:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.356 13:45:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.356 13:45:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.356 13:45:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:11.356 13:45:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.356 13:45:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.356 13:45:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.356 13:45:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.356 13:45:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.356 13:45:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.356 13:45:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.356 13:45:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.356 13:45:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.356 --rc genhtml_branch_coverage=1 00:04:11.356 --rc genhtml_function_coverage=1 00:04:11.356 --rc genhtml_legend=1 00:04:11.356 --rc geninfo_all_blocks=1 00:04:11.356 --rc geninfo_unexecuted_blocks=1 00:04:11.356 00:04:11.356 ' 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.356 --rc genhtml_branch_coverage=1 00:04:11.356 --rc genhtml_function_coverage=1 00:04:11.356 --rc genhtml_legend=1 00:04:11.356 --rc geninfo_all_blocks=1 00:04:11.356 --rc geninfo_unexecuted_blocks=1 00:04:11.356 00:04:11.356 ' 00:04:11.356 13:45:57 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.356 --rc genhtml_branch_coverage=1 00:04:11.356 --rc genhtml_function_coverage=1 00:04:11.356 --rc genhtml_legend=1 00:04:11.356 --rc geninfo_all_blocks=1 00:04:11.356 --rc geninfo_unexecuted_blocks=1 00:04:11.356 00:04:11.356 ' 00:04:11.356 13:45:57 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.356 --rc genhtml_branch_coverage=1 00:04:11.356 --rc genhtml_function_coverage=1 00:04:11.356 --rc genhtml_legend=1 00:04:11.356 --rc geninfo_all_blocks=1 00:04:11.356 --rc geninfo_unexecuted_blocks=1 00:04:11.356 00:04:11.356 ' 00:04:11.356 13:45:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.356 13:45:57 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.356 13:45:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.356 13:45:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.356 13:45:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.356 13:45:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.356 13:45:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 13:45:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 13:45:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 13:45:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:11.357 13:45:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.357 13:45:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.357 13:45:57 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:11.617 INFO: JSON configuration test init 00:04:11.617 13:45:57 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.617 13:45:57 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.617 13:45:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.617 13:45:57 json_config -- json_config/common.sh@10 -- # shift 00:04:11.617 13:45:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.617 13:45:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.617 13:45:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.617 13:45:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.617 13:45:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.617 13:45:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2170620 00:04:11.617 13:45:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.617 Waiting for target to run... 
00:04:11.617 13:45:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2170620 /var/tmp/spdk_tgt.sock 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@833 -- # '[' -z 2170620 ']' 00:04:11.617 13:45:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.617 13:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.617 [2024-11-06 13:45:57.714372] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:11.617 [2024-11-06 13:45:57.714443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170620 ] 00:04:11.877 [2024-11-06 13:45:58.002355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.877 [2024-11-06 13:45:58.031359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:12.446 13:45:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.446 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.446 13:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:12.446 13:45:58 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:12.446 13:45:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:13.016 13:45:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.016 13:45:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:13.016 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@54 -- # sort 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:13.016 13:45:59 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:13.016 13:45:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:13.016 13:45:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.016 13:45:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:13.276 13:45:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.276 13:45:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.276 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.276 MallocForNvmf0 00:04:13.276 13:45:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:13.276 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:13.537 MallocForNvmf1 00:04:13.537 13:45:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:13.537 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:13.537 [2024-11-06 13:45:59.791712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.537 13:45:59 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:13.537 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:13.797 13:45:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:13.797 13:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:14.124 13:46:00 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:14.124 13:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:14.124 13:46:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:14.124 13:46:00 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:14.383 [2024-11-06 13:46:00.469812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:14.383 13:46:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:14.383 13:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.383 13:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.383 13:46:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:14.383 13:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.384 13:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.384 13:46:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:14.384 13:46:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:14.384 13:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:14.643 MallocBdevForConfigChangeCheck 00:04:14.643 13:46:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:14.643 13:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.643 13:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.643 13:46:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:14.643 13:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.902 13:46:01 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:14.902 INFO: shutting down applications... 00:04:14.902 13:46:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:14.902 13:46:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:14.902 13:46:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:14.902 13:46:01 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:15.471 Calling clear_iscsi_subsystem 00:04:15.471 Calling clear_nvmf_subsystem 00:04:15.471 Calling clear_nbd_subsystem 00:04:15.471 Calling clear_ublk_subsystem 00:04:15.471 Calling clear_vhost_blk_subsystem 00:04:15.471 Calling clear_vhost_scsi_subsystem 00:04:15.471 Calling clear_bdev_subsystem 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:15.471 13:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:15.731 13:46:01 json_config -- json_config/json_config.sh@352 -- # break 00:04:15.731 13:46:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:15.731 13:46:01 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:15.731 13:46:01 json_config -- json_config/common.sh@31 -- # local app=target 00:04:15.731 13:46:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.731 13:46:01 json_config -- json_config/common.sh@35 -- # [[ -n 2170620 ]] 00:04:15.731 13:46:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2170620 00:04:15.731 13:46:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.731 13:46:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.731 13:46:01 json_config -- json_config/common.sh@41 -- # kill -0 2170620 00:04:15.731 13:46:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.302 13:46:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.302 13:46:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.302 13:46:02 json_config -- json_config/common.sh@41 -- # kill -0 2170620 00:04:16.302 13:46:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.302 13:46:02 json_config -- json_config/common.sh@43 -- # break 00:04:16.302 13:46:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.302 13:46:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.302 SPDK target shutdown done 00:04:16.302 13:46:02 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:16.302 INFO: relaunching applications... 
00:04:16.302 13:46:02 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.302 13:46:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:16.302 13:46:02 json_config -- json_config/common.sh@10 -- # shift 00:04:16.302 13:46:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.302 13:46:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.302 13:46:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.302 13:46:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.302 13:46:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.302 13:46:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2171754 00:04:16.302 13:46:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.302 Waiting for target to run... 00:04:16.302 13:46:02 json_config -- json_config/common.sh@25 -- # waitforlisten 2171754 /var/tmp/spdk_tgt.sock 00:04:16.302 13:46:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@833 -- # '[' -z 2171754 ']' 00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.302 13:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.302 [2024-11-06 13:46:02.491848] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:16.302 [2024-11-06 13:46:02.491911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171754 ] 00:04:16.872 [2024-11-06 13:46:02.893260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.872 [2024-11-06 13:46:02.918304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.443 [2024-11-06 13:46:03.422238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.443 [2024-11-06 13:46:03.454591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.443 13:46:03 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:17.443 13:46:03 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:17.443 13:46:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.443 00:04:17.443 13:46:03 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:17.443 13:46:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:17.443 INFO: Checking if target configuration is the same... 
00:04:17.443 13:46:03 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:17.443 13:46:03 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.443 13:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.443 + '[' 2 -ne 2 ']' 00:04:17.443 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.443 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:17.443 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.443 +++ basename /dev/fd/62 00:04:17.443 ++ mktemp /tmp/62.XXX 00:04:17.443 + tmp_file_1=/tmp/62.9qO 00:04:17.443 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.443 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.443 + tmp_file_2=/tmp/spdk_tgt_config.json.uI3 00:04:17.443 + ret=0 00:04:17.443 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.703 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.703 + diff -u /tmp/62.9qO /tmp/spdk_tgt_config.json.uI3 00:04:17.703 + echo 'INFO: JSON config files are the same' 00:04:17.703 INFO: JSON config files are the same 00:04:17.703 + rm /tmp/62.9qO /tmp/spdk_tgt_config.json.uI3 00:04:17.703 + exit 0 00:04:17.703 13:46:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:17.703 13:46:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:17.703 INFO: changing configuration and checking if this can be detected... 
00:04:17.703 13:46:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.703 13:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.964 13:46:04 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.964 13:46:04 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:17.964 13:46:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.964 + '[' 2 -ne 2 ']' 00:04:17.964 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.964 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:17.964 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.964 +++ basename /dev/fd/62 00:04:17.964 ++ mktemp /tmp/62.XXX 00:04:17.964 + tmp_file_1=/tmp/62.Kfq 00:04:17.964 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.964 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.964 + tmp_file_2=/tmp/spdk_tgt_config.json.2DX 00:04:17.964 + ret=0 00:04:17.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.225 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.225 + diff -u /tmp/62.Kfq /tmp/spdk_tgt_config.json.2DX 00:04:18.225 + ret=1 00:04:18.225 + echo '=== Start of file: /tmp/62.Kfq ===' 00:04:18.225 + cat /tmp/62.Kfq 00:04:18.225 + echo '=== End of file: /tmp/62.Kfq ===' 00:04:18.225 + echo '' 00:04:18.225 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2DX ===' 00:04:18.225 + cat /tmp/spdk_tgt_config.json.2DX 00:04:18.225 + echo '=== End of file: /tmp/spdk_tgt_config.json.2DX ===' 00:04:18.225 + echo '' 00:04:18.225 + rm /tmp/62.Kfq /tmp/spdk_tgt_config.json.2DX 00:04:18.225 + exit 1 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:18.225 INFO: configuration change detected. 
00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 2171754 ]] 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:18.225 13:46:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.225 13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.484 13:46:04 json_config -- json_config/json_config.sh@330 -- # killprocess 2171754 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@952 -- # '[' -z 2171754 ']' 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@956 -- # kill -0 
2171754 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@957 -- # uname 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2171754 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2171754' 00:04:18.484 killing process with pid 2171754 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@971 -- # kill 2171754 00:04:18.484 13:46:04 json_config -- common/autotest_common.sh@976 -- # wait 2171754 00:04:18.744 13:46:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.744 13:46:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:18.744 13:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.744 13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.744 13:46:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:18.744 13:46:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:18.744 INFO: Success 00:04:18.744 00:04:18.744 real 0m7.460s 00:04:18.744 user 0m8.897s 00:04:18.744 sys 0m2.070s 00:04:18.744 13:46:04 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.744 13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.744 ************************************ 00:04:18.744 END TEST json_config 00:04:18.744 ************************************ 00:04:18.744 13:46:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.744 13:46:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.744 13:46:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.744 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:04:18.744 ************************************ 00:04:18.744 START TEST json_config_extra_key 00:04:18.744 ************************************ 00:04:18.745 13:46:04 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.005 13:46:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.005 --rc genhtml_branch_coverage=1 00:04:19.005 --rc genhtml_function_coverage=1 00:04:19.005 --rc genhtml_legend=1 00:04:19.005 --rc geninfo_all_blocks=1 
00:04:19.005 --rc geninfo_unexecuted_blocks=1 00:04:19.005 00:04:19.005 ' 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.005 --rc genhtml_branch_coverage=1 00:04:19.005 --rc genhtml_function_coverage=1 00:04:19.005 --rc genhtml_legend=1 00:04:19.005 --rc geninfo_all_blocks=1 00:04:19.005 --rc geninfo_unexecuted_blocks=1 00:04:19.005 00:04:19.005 ' 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.005 --rc genhtml_branch_coverage=1 00:04:19.005 --rc genhtml_function_coverage=1 00:04:19.005 --rc genhtml_legend=1 00:04:19.005 --rc geninfo_all_blocks=1 00:04:19.005 --rc geninfo_unexecuted_blocks=1 00:04:19.005 00:04:19.005 ' 00:04:19.005 13:46:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.005 --rc genhtml_branch_coverage=1 00:04:19.005 --rc genhtml_function_coverage=1 00:04:19.005 --rc genhtml_legend=1 00:04:19.005 --rc geninfo_all_blocks=1 00:04:19.005 --rc geninfo_unexecuted_blocks=1 00:04:19.005 00:04:19.005 ' 00:04:19.005 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.005 13:46:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.006 13:46:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.006 13:46:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.006 13:46:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.006 13:46:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.006 13:46:05 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.006 13:46:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.006 13:46:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.006 13:46:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:19.006 13:46:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:19.006 13:46:05 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.006 13:46:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:19.006 INFO: launching applications... 00:04:19.006 13:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2172366 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.006 Waiting for target to run... 
00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2172366 /var/tmp/spdk_tgt.sock 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2172366 ']' 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.006 13:46:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:19.006 13:46:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.006 [2024-11-06 13:46:05.240617] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:19.006 [2024-11-06 13:46:05.240694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172366 ] 00:04:19.576 [2024-11-06 13:46:05.630152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.577 [2024-11-06 13:46:05.654683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.836 13:46:06 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.836 13:46:06 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:19.836 00:04:19.836 13:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:19.836 INFO: shutting down applications... 00:04:19.836 13:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2172366 ]] 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2172366 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2172366 00:04:19.836 13:46:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.407 13:46:06 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2172366 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.407 13:46:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.407 SPDK target shutdown done 00:04:20.407 13:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:20.407 Success 00:04:20.407 00:04:20.407 real 0m1.585s 00:04:20.407 user 0m1.115s 00:04:20.407 sys 0m0.506s 00:04:20.407 13:46:06 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.407 13:46:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.407 ************************************ 00:04:20.407 END TEST json_config_extra_key 00:04:20.407 ************************************ 00:04:20.407 13:46:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.407 13:46:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.407 13:46:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.407 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:20.407 ************************************ 00:04:20.407 START TEST alias_rpc 00:04:20.407 ************************************ 00:04:20.407 13:46:06 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.667 * Looking for test storage... 
00:04:20.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.667 13:46:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.667 --rc genhtml_branch_coverage=1 00:04:20.667 --rc genhtml_function_coverage=1 00:04:20.667 --rc genhtml_legend=1 00:04:20.667 --rc geninfo_all_blocks=1 00:04:20.667 --rc geninfo_unexecuted_blocks=1 00:04:20.667 00:04:20.667 ' 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.667 --rc genhtml_branch_coverage=1 00:04:20.667 --rc genhtml_function_coverage=1 00:04:20.667 --rc genhtml_legend=1 00:04:20.667 --rc geninfo_all_blocks=1 00:04:20.667 --rc geninfo_unexecuted_blocks=1 00:04:20.667 00:04:20.667 ' 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:20.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.667 --rc genhtml_branch_coverage=1 00:04:20.667 --rc genhtml_function_coverage=1 00:04:20.667 --rc genhtml_legend=1 00:04:20.667 --rc geninfo_all_blocks=1 00:04:20.667 --rc geninfo_unexecuted_blocks=1 00:04:20.667 00:04:20.667 ' 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.667 --rc genhtml_branch_coverage=1 00:04:20.667 --rc genhtml_function_coverage=1 00:04:20.667 --rc genhtml_legend=1 00:04:20.667 --rc geninfo_all_blocks=1 00:04:20.667 --rc geninfo_unexecuted_blocks=1 00:04:20.667 00:04:20.667 ' 00:04:20.667 13:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:20.667 13:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2172712 00:04:20.667 13:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2172712 00:04:20.667 13:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.667 13:46:06 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2172712 ']' 00:04:20.668 13:46:06 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.668 13:46:06 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:20.668 13:46:06 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.668 13:46:06 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:20.668 13:46:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.668 [2024-11-06 13:46:06.898647] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:20.668 [2024-11-06 13:46:06.898722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172712 ] 00:04:20.927 [2024-11-06 13:46:06.987954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.927 [2024-11-06 13:46:07.027247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.497 13:46:07 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:21.497 13:46:07 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:21.497 13:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:21.757 13:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2172712 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2172712 ']' 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2172712 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2172712 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2172712' 00:04:21.757 killing process with pid 2172712 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@971 -- # kill 2172712 00:04:21.757 13:46:07 alias_rpc -- common/autotest_common.sh@976 -- # wait 2172712 00:04:22.018 00:04:22.018 real 0m1.512s 00:04:22.018 user 0m1.627s 00:04:22.018 sys 0m0.454s 00:04:22.018 13:46:08 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.018 13:46:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.018 ************************************ 00:04:22.018 END TEST alias_rpc 00:04:22.018 ************************************ 00:04:22.018 13:46:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:22.018 13:46:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.018 13:46:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.018 13:46:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.018 13:46:08 -- common/autotest_common.sh@10 -- # set +x 00:04:22.018 ************************************ 00:04:22.018 START TEST spdkcli_tcp 00:04:22.018 ************************************ 00:04:22.018 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.279 * Looking for test storage... 
00:04:22.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.279 13:46:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.279 --rc genhtml_branch_coverage=1 00:04:22.279 --rc genhtml_function_coverage=1 00:04:22.279 --rc genhtml_legend=1 00:04:22.279 --rc geninfo_all_blocks=1 00:04:22.279 --rc geninfo_unexecuted_blocks=1 00:04:22.279 00:04:22.279 ' 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.279 --rc genhtml_branch_coverage=1 00:04:22.279 --rc genhtml_function_coverage=1 00:04:22.279 --rc genhtml_legend=1 00:04:22.279 --rc geninfo_all_blocks=1 00:04:22.279 --rc geninfo_unexecuted_blocks=1 00:04:22.279 00:04:22.279 ' 00:04:22.279 13:46:08 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.279 --rc genhtml_branch_coverage=1 00:04:22.279 --rc genhtml_function_coverage=1 00:04:22.279 --rc genhtml_legend=1 00:04:22.279 --rc geninfo_all_blocks=1 00:04:22.279 --rc geninfo_unexecuted_blocks=1 00:04:22.279 00:04:22.279 ' 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.279 --rc genhtml_branch_coverage=1 00:04:22.279 --rc genhtml_function_coverage=1 00:04:22.279 --rc genhtml_legend=1 00:04:22.279 --rc geninfo_all_blocks=1 00:04:22.279 --rc geninfo_unexecuted_blocks=1 00:04:22.279 00:04:22.279 ' 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2173035 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2173035 00:04:22.279 13:46:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2173035 ']' 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.279 13:46:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 [2024-11-06 13:46:08.489718] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:22.279 [2024-11-06 13:46:08.489811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173035 ] 00:04:22.540 [2024-11-06 13:46:08.578810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.540 [2024-11-06 13:46:08.615755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.540 [2024-11-06 13:46:08.615768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.114 13:46:09 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:23.114 13:46:09 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:23.114 13:46:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2173343 00:04:23.114 13:46:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:23.114 13:46:09 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.374 [ 00:04:23.374 "bdev_malloc_delete", 00:04:23.374 "bdev_malloc_create", 00:04:23.374 "bdev_null_resize", 00:04:23.374 "bdev_null_delete", 00:04:23.375 "bdev_null_create", 00:04:23.375 "bdev_nvme_cuse_unregister", 00:04:23.375 "bdev_nvme_cuse_register", 00:04:23.375 "bdev_opal_new_user", 00:04:23.375 "bdev_opal_set_lock_state", 00:04:23.375 "bdev_opal_delete", 00:04:23.375 "bdev_opal_get_info", 00:04:23.375 "bdev_opal_create", 00:04:23.375 "bdev_nvme_opal_revert", 00:04:23.375 "bdev_nvme_opal_init", 00:04:23.375 "bdev_nvme_send_cmd", 00:04:23.375 "bdev_nvme_set_keys", 00:04:23.375 "bdev_nvme_get_path_iostat", 00:04:23.375 "bdev_nvme_get_mdns_discovery_info", 00:04:23.375 "bdev_nvme_stop_mdns_discovery", 00:04:23.375 "bdev_nvme_start_mdns_discovery", 00:04:23.375 "bdev_nvme_set_multipath_policy", 00:04:23.375 "bdev_nvme_set_preferred_path", 00:04:23.375 "bdev_nvme_get_io_paths", 00:04:23.375 "bdev_nvme_remove_error_injection", 00:04:23.375 "bdev_nvme_add_error_injection", 00:04:23.375 "bdev_nvme_get_discovery_info", 00:04:23.375 "bdev_nvme_stop_discovery", 00:04:23.375 "bdev_nvme_start_discovery", 00:04:23.375 "bdev_nvme_get_controller_health_info", 00:04:23.375 "bdev_nvme_disable_controller", 00:04:23.375 "bdev_nvme_enable_controller", 00:04:23.375 "bdev_nvme_reset_controller", 00:04:23.375 "bdev_nvme_get_transport_statistics", 00:04:23.375 "bdev_nvme_apply_firmware", 00:04:23.375 "bdev_nvme_detach_controller", 00:04:23.375 "bdev_nvme_get_controllers", 00:04:23.375 "bdev_nvme_attach_controller", 00:04:23.375 "bdev_nvme_set_hotplug", 00:04:23.375 "bdev_nvme_set_options", 00:04:23.375 "bdev_passthru_delete", 00:04:23.375 "bdev_passthru_create", 00:04:23.375 "bdev_lvol_set_parent_bdev", 00:04:23.375 "bdev_lvol_set_parent", 00:04:23.375 "bdev_lvol_check_shallow_copy", 00:04:23.375 "bdev_lvol_start_shallow_copy", 00:04:23.375 "bdev_lvol_grow_lvstore", 00:04:23.375 
"bdev_lvol_get_lvols", 00:04:23.375 "bdev_lvol_get_lvstores", 00:04:23.375 "bdev_lvol_delete", 00:04:23.375 "bdev_lvol_set_read_only", 00:04:23.375 "bdev_lvol_resize", 00:04:23.375 "bdev_lvol_decouple_parent", 00:04:23.375 "bdev_lvol_inflate", 00:04:23.375 "bdev_lvol_rename", 00:04:23.375 "bdev_lvol_clone_bdev", 00:04:23.375 "bdev_lvol_clone", 00:04:23.375 "bdev_lvol_snapshot", 00:04:23.375 "bdev_lvol_create", 00:04:23.375 "bdev_lvol_delete_lvstore", 00:04:23.375 "bdev_lvol_rename_lvstore", 00:04:23.375 "bdev_lvol_create_lvstore", 00:04:23.375 "bdev_raid_set_options", 00:04:23.375 "bdev_raid_remove_base_bdev", 00:04:23.375 "bdev_raid_add_base_bdev", 00:04:23.375 "bdev_raid_delete", 00:04:23.375 "bdev_raid_create", 00:04:23.375 "bdev_raid_get_bdevs", 00:04:23.375 "bdev_error_inject_error", 00:04:23.375 "bdev_error_delete", 00:04:23.375 "bdev_error_create", 00:04:23.375 "bdev_split_delete", 00:04:23.375 "bdev_split_create", 00:04:23.375 "bdev_delay_delete", 00:04:23.375 "bdev_delay_create", 00:04:23.375 "bdev_delay_update_latency", 00:04:23.375 "bdev_zone_block_delete", 00:04:23.375 "bdev_zone_block_create", 00:04:23.375 "blobfs_create", 00:04:23.375 "blobfs_detect", 00:04:23.375 "blobfs_set_cache_size", 00:04:23.375 "bdev_aio_delete", 00:04:23.375 "bdev_aio_rescan", 00:04:23.375 "bdev_aio_create", 00:04:23.375 "bdev_ftl_set_property", 00:04:23.375 "bdev_ftl_get_properties", 00:04:23.375 "bdev_ftl_get_stats", 00:04:23.375 "bdev_ftl_unmap", 00:04:23.375 "bdev_ftl_unload", 00:04:23.375 "bdev_ftl_delete", 00:04:23.375 "bdev_ftl_load", 00:04:23.375 "bdev_ftl_create", 00:04:23.375 "bdev_virtio_attach_controller", 00:04:23.375 "bdev_virtio_scsi_get_devices", 00:04:23.375 "bdev_virtio_detach_controller", 00:04:23.375 "bdev_virtio_blk_set_hotplug", 00:04:23.375 "bdev_iscsi_delete", 00:04:23.375 "bdev_iscsi_create", 00:04:23.375 "bdev_iscsi_set_options", 00:04:23.375 "accel_error_inject_error", 00:04:23.375 "ioat_scan_accel_module", 00:04:23.375 "dsa_scan_accel_module", 
00:04:23.375 "iaa_scan_accel_module", 00:04:23.375 "vfu_virtio_create_fs_endpoint", 00:04:23.375 "vfu_virtio_create_scsi_endpoint", 00:04:23.375 "vfu_virtio_scsi_remove_target", 00:04:23.375 "vfu_virtio_scsi_add_target", 00:04:23.375 "vfu_virtio_create_blk_endpoint", 00:04:23.375 "vfu_virtio_delete_endpoint", 00:04:23.375 "keyring_file_remove_key", 00:04:23.375 "keyring_file_add_key", 00:04:23.375 "keyring_linux_set_options", 00:04:23.375 "fsdev_aio_delete", 00:04:23.375 "fsdev_aio_create", 00:04:23.375 "iscsi_get_histogram", 00:04:23.375 "iscsi_enable_histogram", 00:04:23.375 "iscsi_set_options", 00:04:23.375 "iscsi_get_auth_groups", 00:04:23.375 "iscsi_auth_group_remove_secret", 00:04:23.375 "iscsi_auth_group_add_secret", 00:04:23.375 "iscsi_delete_auth_group", 00:04:23.375 "iscsi_create_auth_group", 00:04:23.375 "iscsi_set_discovery_auth", 00:04:23.375 "iscsi_get_options", 00:04:23.375 "iscsi_target_node_request_logout", 00:04:23.375 "iscsi_target_node_set_redirect", 00:04:23.375 "iscsi_target_node_set_auth", 00:04:23.375 "iscsi_target_node_add_lun", 00:04:23.375 "iscsi_get_stats", 00:04:23.375 "iscsi_get_connections", 00:04:23.375 "iscsi_portal_group_set_auth", 00:04:23.375 "iscsi_start_portal_group", 00:04:23.375 "iscsi_delete_portal_group", 00:04:23.375 "iscsi_create_portal_group", 00:04:23.375 "iscsi_get_portal_groups", 00:04:23.375 "iscsi_delete_target_node", 00:04:23.375 "iscsi_target_node_remove_pg_ig_maps", 00:04:23.375 "iscsi_target_node_add_pg_ig_maps", 00:04:23.375 "iscsi_create_target_node", 00:04:23.375 "iscsi_get_target_nodes", 00:04:23.375 "iscsi_delete_initiator_group", 00:04:23.375 "iscsi_initiator_group_remove_initiators", 00:04:23.375 "iscsi_initiator_group_add_initiators", 00:04:23.375 "iscsi_create_initiator_group", 00:04:23.375 "iscsi_get_initiator_groups", 00:04:23.375 "nvmf_set_crdt", 00:04:23.375 "nvmf_set_config", 00:04:23.375 "nvmf_set_max_subsystems", 00:04:23.375 "nvmf_stop_mdns_prr", 00:04:23.375 "nvmf_publish_mdns_prr", 
00:04:23.375 "nvmf_subsystem_get_listeners", 00:04:23.375 "nvmf_subsystem_get_qpairs", 00:04:23.375 "nvmf_subsystem_get_controllers", 00:04:23.375 "nvmf_get_stats", 00:04:23.375 "nvmf_get_transports", 00:04:23.375 "nvmf_create_transport", 00:04:23.375 "nvmf_get_targets", 00:04:23.375 "nvmf_delete_target", 00:04:23.375 "nvmf_create_target", 00:04:23.375 "nvmf_subsystem_allow_any_host", 00:04:23.375 "nvmf_subsystem_set_keys", 00:04:23.375 "nvmf_subsystem_remove_host", 00:04:23.375 "nvmf_subsystem_add_host", 00:04:23.375 "nvmf_ns_remove_host", 00:04:23.375 "nvmf_ns_add_host", 00:04:23.375 "nvmf_subsystem_remove_ns", 00:04:23.375 "nvmf_subsystem_set_ns_ana_group", 00:04:23.375 "nvmf_subsystem_add_ns", 00:04:23.375 "nvmf_subsystem_listener_set_ana_state", 00:04:23.375 "nvmf_discovery_get_referrals", 00:04:23.375 "nvmf_discovery_remove_referral", 00:04:23.375 "nvmf_discovery_add_referral", 00:04:23.375 "nvmf_subsystem_remove_listener", 00:04:23.375 "nvmf_subsystem_add_listener", 00:04:23.375 "nvmf_delete_subsystem", 00:04:23.375 "nvmf_create_subsystem", 00:04:23.375 "nvmf_get_subsystems", 00:04:23.375 "env_dpdk_get_mem_stats", 00:04:23.375 "nbd_get_disks", 00:04:23.375 "nbd_stop_disk", 00:04:23.375 "nbd_start_disk", 00:04:23.375 "ublk_recover_disk", 00:04:23.375 "ublk_get_disks", 00:04:23.375 "ublk_stop_disk", 00:04:23.375 "ublk_start_disk", 00:04:23.375 "ublk_destroy_target", 00:04:23.375 "ublk_create_target", 00:04:23.375 "virtio_blk_create_transport", 00:04:23.375 "virtio_blk_get_transports", 00:04:23.375 "vhost_controller_set_coalescing", 00:04:23.375 "vhost_get_controllers", 00:04:23.375 "vhost_delete_controller", 00:04:23.375 "vhost_create_blk_controller", 00:04:23.375 "vhost_scsi_controller_remove_target", 00:04:23.375 "vhost_scsi_controller_add_target", 00:04:23.375 "vhost_start_scsi_controller", 00:04:23.375 "vhost_create_scsi_controller", 00:04:23.375 "thread_set_cpumask", 00:04:23.375 "scheduler_set_options", 00:04:23.375 "framework_get_governor", 00:04:23.375 
"framework_get_scheduler", 00:04:23.375 "framework_set_scheduler", 00:04:23.375 "framework_get_reactors", 00:04:23.375 "thread_get_io_channels", 00:04:23.375 "thread_get_pollers", 00:04:23.375 "thread_get_stats", 00:04:23.375 "framework_monitor_context_switch", 00:04:23.375 "spdk_kill_instance", 00:04:23.375 "log_enable_timestamps", 00:04:23.375 "log_get_flags", 00:04:23.375 "log_clear_flag", 00:04:23.375 "log_set_flag", 00:04:23.375 "log_get_level", 00:04:23.375 "log_set_level", 00:04:23.375 "log_get_print_level", 00:04:23.375 "log_set_print_level", 00:04:23.375 "framework_enable_cpumask_locks", 00:04:23.375 "framework_disable_cpumask_locks", 00:04:23.375 "framework_wait_init", 00:04:23.375 "framework_start_init", 00:04:23.375 "scsi_get_devices", 00:04:23.375 "bdev_get_histogram", 00:04:23.375 "bdev_enable_histogram", 00:04:23.375 "bdev_set_qos_limit", 00:04:23.375 "bdev_set_qd_sampling_period", 00:04:23.375 "bdev_get_bdevs", 00:04:23.375 "bdev_reset_iostat", 00:04:23.375 "bdev_get_iostat", 00:04:23.375 "bdev_examine", 00:04:23.375 "bdev_wait_for_examine", 00:04:23.375 "bdev_set_options", 00:04:23.375 "accel_get_stats", 00:04:23.375 "accel_set_options", 00:04:23.375 "accel_set_driver", 00:04:23.375 "accel_crypto_key_destroy", 00:04:23.375 "accel_crypto_keys_get", 00:04:23.375 "accel_crypto_key_create", 00:04:23.375 "accel_assign_opc", 00:04:23.375 "accel_get_module_info", 00:04:23.375 "accel_get_opc_assignments", 00:04:23.375 "vmd_rescan", 00:04:23.375 "vmd_remove_device", 00:04:23.375 "vmd_enable", 00:04:23.375 "sock_get_default_impl", 00:04:23.375 "sock_set_default_impl", 00:04:23.375 "sock_impl_set_options", 00:04:23.376 "sock_impl_get_options", 00:04:23.376 "iobuf_get_stats", 00:04:23.376 "iobuf_set_options", 00:04:23.376 "keyring_get_keys", 00:04:23.376 "vfu_tgt_set_base_path", 00:04:23.376 "framework_get_pci_devices", 00:04:23.376 "framework_get_config", 00:04:23.376 "framework_get_subsystems", 00:04:23.376 "fsdev_set_opts", 00:04:23.376 "fsdev_get_opts", 
00:04:23.376 "trace_get_info", 00:04:23.376 "trace_get_tpoint_group_mask", 00:04:23.376 "trace_disable_tpoint_group", 00:04:23.376 "trace_enable_tpoint_group", 00:04:23.376 "trace_clear_tpoint_mask", 00:04:23.376 "trace_set_tpoint_mask", 00:04:23.376 "notify_get_notifications", 00:04:23.376 "notify_get_types", 00:04:23.376 "spdk_get_version", 00:04:23.376 "rpc_get_methods" 00:04:23.376 ] 00:04:23.376 13:46:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.376 13:46:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.376 13:46:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2173035 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2173035 ']' 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2173035 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2173035 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2173035' 00:04:23.376 killing process with pid 2173035 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2173035 00:04:23.376 13:46:09 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2173035 00:04:23.636 00:04:23.636 real 0m1.532s 00:04:23.636 user 0m2.777s 00:04:23.636 sys 0m0.470s 00:04:23.636 13:46:09 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.636 13:46:09 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.636 ************************************ 00:04:23.636 END TEST spdkcli_tcp 00:04:23.636 ************************************ 00:04:23.636 13:46:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.636 13:46:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.636 13:46:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.636 13:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:23.636 ************************************ 00:04:23.636 START TEST dpdk_mem_utility 00:04:23.636 ************************************ 00:04:23.636 13:46:09 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.897 * Looking for test storage... 00:04:23.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:23.897 13:46:09 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.897 13:46:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.897 13:46:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.897 13:46:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:04:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.897 --rc genhtml_branch_coverage=1 00:04:23.897 --rc genhtml_function_coverage=1 00:04:23.897 --rc genhtml_legend=1 00:04:23.897 --rc geninfo_all_blocks=1 00:04:23.897 --rc geninfo_unexecuted_blocks=1 00:04:23.897 00:04:23.897 ' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.897 --rc genhtml_branch_coverage=1 00:04:23.897 --rc genhtml_function_coverage=1 00:04:23.897 --rc genhtml_legend=1 00:04:23.897 --rc geninfo_all_blocks=1 00:04:23.897 --rc geninfo_unexecuted_blocks=1 00:04:23.897 00:04:23.897 ' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.897 --rc genhtml_branch_coverage=1 00:04:23.897 --rc genhtml_function_coverage=1 00:04:23.897 --rc genhtml_legend=1 00:04:23.897 --rc geninfo_all_blocks=1 00:04:23.897 --rc geninfo_unexecuted_blocks=1 00:04:23.897 00:04:23.897 ' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.897 --rc genhtml_branch_coverage=1 00:04:23.897 --rc genhtml_function_coverage=1 00:04:23.897 --rc genhtml_legend=1 00:04:23.897 --rc geninfo_all_blocks=1 00:04:23.897 --rc geninfo_unexecuted_blocks=1 00:04:23.897 00:04:23.897 ' 00:04:23.897 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:23.897 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2173433 00:04:23.897 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2173433 00:04:23.897 13:46:10 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2173433 ']' 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:23.897 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.897 [2024-11-06 13:46:10.087631] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:23.897 [2024-11-06 13:46:10.087705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173433 ] 00:04:24.158 [2024-11-06 13:46:10.178950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.158 [2024-11-06 13:46:10.221960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.732 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.732 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:24.732 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:24.733 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:24.733 13:46:10 dpdk_mem_utility -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.733 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.733 { 00:04:24.733 "filename": "/tmp/spdk_mem_dump.txt" 00:04:24.733 } 00:04:24.733 13:46:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.733 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.733 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:24.733 1 heaps totaling size 818.000000 MiB 00:04:24.733 size: 818.000000 MiB heap id: 0 00:04:24.733 end heaps---------- 00:04:24.733 9 mempools totaling size 603.782043 MiB 00:04:24.733 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:24.733 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:24.733 size: 100.555481 MiB name: bdev_io_2173433 00:04:24.733 size: 50.003479 MiB name: msgpool_2173433 00:04:24.733 size: 36.509338 MiB name: fsdev_io_2173433 00:04:24.733 size: 21.763794 MiB name: PDU_Pool 00:04:24.733 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:24.733 size: 4.133484 MiB name: evtpool_2173433 00:04:24.733 size: 0.026123 MiB name: Session_Pool 00:04:24.733 end mempools------- 00:04:24.733 6 memzones totaling size 4.142822 MiB 00:04:24.733 size: 1.000366 MiB name: RG_ring_0_2173433 00:04:24.733 size: 1.000366 MiB name: RG_ring_1_2173433 00:04:24.733 size: 1.000366 MiB name: RG_ring_4_2173433 00:04:24.733 size: 1.000366 MiB name: RG_ring_5_2173433 00:04:24.733 size: 0.125366 MiB name: RG_ring_2_2173433 00:04:24.733 size: 0.015991 MiB name: RG_ring_3_2173433 00:04:24.733 end memzones------- 00:04:24.733 13:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:24.733 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:24.733 list of free elements. 
size: 10.852478 MiB 00:04:24.733 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:24.733 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:24.733 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:24.733 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:24.733 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:24.733 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:24.733 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:24.733 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:24.733 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:24.733 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:24.733 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:24.733 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:24.733 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:24.733 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:24.733 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:24.733 list of standard malloc elements. 
size: 199.218628 MiB 00:04:24.733 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:24.733 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:24.733 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:24.733 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:24.733 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:24.733 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.733 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:24.733 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.733 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:24.733 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:24.733 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:24.733 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:24.733 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:24.733 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:24.733 list of memzone associated elements. 
size: 607.928894 MiB 00:04:24.733 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:24.733 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.733 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:24.733 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.733 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:24.733 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2173433_0 00:04:24.733 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:24.733 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2173433_0 00:04:24.733 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:24.733 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2173433_0 00:04:24.733 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:24.733 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.733 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:24.733 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.733 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:24.733 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2173433_0 00:04:24.733 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:24.733 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2173433 00:04:24.733 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.733 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2173433 00:04:24.733 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:24.733 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.733 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:24.733 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.733 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:24.733 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.733 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:24.733 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.733 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:24.733 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2173433 00:04:24.733 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:24.733 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2173433 00:04:24.733 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:24.733 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2173433 00:04:24.733 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:24.733 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2173433 00:04:24.733 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:24.733 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2173433 00:04:24.733 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:24.733 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2173433 00:04:24.733 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:24.733 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.733 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:24.733 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.733 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:24.733 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.733 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:24.733 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2173433 00:04:24.733 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:24.733 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2173433 00:04:24.733 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:24.733 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.733 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:24.733 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.733 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:24.733 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2173433 00:04:24.733 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:24.733 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.734 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:24.734 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2173433 00:04:24.734 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:24.734 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2173433 00:04:24.734 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:24.734 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2173433 00:04:24.734 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:24.734 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.734 13:46:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.734 13:46:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2173433 00:04:24.734 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2173433 ']' 00:04:24.734 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2173433 00:04:24.734 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2173433 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:24.994 13:46:11 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2173433' 00:04:24.994 killing process with pid 2173433 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2173433 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2173433 00:04:24.994 00:04:24.994 real 0m1.427s 00:04:24.994 user 0m1.502s 00:04:24.994 sys 0m0.438s 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.994 13:46:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.994 ************************************ 00:04:24.994 END TEST dpdk_mem_utility 00:04:24.994 ************************************ 00:04:25.254 13:46:11 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.254 13:46:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.254 13:46:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.254 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:04:25.254 ************************************ 00:04:25.254 START TEST event 00:04:25.254 ************************************ 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.254 * Looking for test storage... 
00:04:25.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.254 13:46:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.254 13:46:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.254 13:46:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.254 13:46:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.254 13:46:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.254 13:46:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.254 13:46:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.254 13:46:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.254 13:46:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.254 13:46:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.254 13:46:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.254 13:46:11 event -- scripts/common.sh@344 -- # case "$op" in 00:04:25.254 13:46:11 event -- scripts/common.sh@345 -- # : 1 00:04:25.254 13:46:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.254 13:46:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.254 13:46:11 event -- scripts/common.sh@365 -- # decimal 1 00:04:25.254 13:46:11 event -- scripts/common.sh@353 -- # local d=1 00:04:25.254 13:46:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.254 13:46:11 event -- scripts/common.sh@355 -- # echo 1 00:04:25.254 13:46:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.254 13:46:11 event -- scripts/common.sh@366 -- # decimal 2 00:04:25.254 13:46:11 event -- scripts/common.sh@353 -- # local d=2 00:04:25.254 13:46:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.254 13:46:11 event -- scripts/common.sh@355 -- # echo 2 00:04:25.254 13:46:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.254 13:46:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.254 13:46:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.254 13:46:11 event -- scripts/common.sh@368 -- # return 0 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.254 --rc genhtml_branch_coverage=1 00:04:25.254 --rc genhtml_function_coverage=1 00:04:25.254 --rc genhtml_legend=1 00:04:25.254 --rc geninfo_all_blocks=1 00:04:25.254 --rc geninfo_unexecuted_blocks=1 00:04:25.254 00:04:25.254 ' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.254 --rc genhtml_branch_coverage=1 00:04:25.254 --rc genhtml_function_coverage=1 00:04:25.254 --rc genhtml_legend=1 00:04:25.254 --rc geninfo_all_blocks=1 00:04:25.254 --rc geninfo_unexecuted_blocks=1 00:04:25.254 00:04:25.254 ' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:25.254 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:25.254 --rc genhtml_branch_coverage=1 00:04:25.254 --rc genhtml_function_coverage=1 00:04:25.254 --rc genhtml_legend=1 00:04:25.254 --rc geninfo_all_blocks=1 00:04:25.254 --rc geninfo_unexecuted_blocks=1 00:04:25.254 00:04:25.254 ' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.254 --rc genhtml_branch_coverage=1 00:04:25.254 --rc genhtml_function_coverage=1 00:04:25.254 --rc genhtml_legend=1 00:04:25.254 --rc geninfo_all_blocks=1 00:04:25.254 --rc geninfo_unexecuted_blocks=1 00:04:25.254 00:04:25.254 ' 00:04:25.254 13:46:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.254 13:46:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.254 13:46:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:25.254 13:46:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.254 13:46:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.514 ************************************ 00:04:25.514 START TEST event_perf 00:04:25.514 ************************************ 00:04:25.514 13:46:11 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.514 Running I/O for 1 seconds...[2024-11-06 13:46:11.586364] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:25.514 [2024-11-06 13:46:11.586465] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173824 ] 00:04:25.514 [2024-11-06 13:46:11.673669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.514 [2024-11-06 13:46:11.708457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.514 [2024-11-06 13:46:11.708609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.514 [2024-11-06 13:46:11.709022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.514 [2024-11-06 13:46:11.709092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.453 Running I/O for 1 seconds... 00:04:26.453 lcore 0: 185371 00:04:26.453 lcore 1: 185373 00:04:26.453 lcore 2: 185375 00:04:26.453 lcore 3: 185376 00:04:26.712 done. 
00:04:26.712 00:04:26.712 real 0m1.171s 00:04:26.713 user 0m4.083s 00:04:26.713 sys 0m0.085s 00:04:26.713 13:46:12 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.713 13:46:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.713 ************************************ 00:04:26.713 END TEST event_perf 00:04:26.713 ************************************ 00:04:26.713 13:46:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.713 13:46:12 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:26.713 13:46:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.713 13:46:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.713 ************************************ 00:04:26.713 START TEST event_reactor 00:04:26.713 ************************************ 00:04:26.713 13:46:12 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.713 [2024-11-06 13:46:12.832886] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:26.713 [2024-11-06 13:46:12.832989] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174186 ] 00:04:26.713 [2024-11-06 13:46:12.920934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.713 [2024-11-06 13:46:12.951703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.097 test_start 00:04:28.097 oneshot 00:04:28.097 tick 100 00:04:28.097 tick 100 00:04:28.097 tick 250 00:04:28.097 tick 100 00:04:28.097 tick 100 00:04:28.097 tick 100 00:04:28.097 tick 250 00:04:28.097 tick 500 00:04:28.097 tick 100 00:04:28.097 tick 100 00:04:28.097 tick 250 00:04:28.097 tick 100 00:04:28.097 tick 100 00:04:28.097 test_end 00:04:28.097 00:04:28.097 real 0m1.166s 00:04:28.097 user 0m1.086s 00:04:28.097 sys 0m0.076s 00:04:28.097 13:46:13 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.097 13:46:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 ************************************ 00:04:28.097 END TEST event_reactor 00:04:28.097 ************************************ 00:04:28.097 13:46:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.097 13:46:14 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:28.097 13:46:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.097 13:46:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 ************************************ 00:04:28.097 START TEST event_reactor_perf 00:04:28.097 ************************************ 00:04:28.097 13:46:14 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:28.097 [2024-11-06 13:46:14.078716] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:28.097 [2024-11-06 13:46:14.078833] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174511 ] 00:04:28.097 [2024-11-06 13:46:14.167310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.097 [2024-11-06 13:46:14.205714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.040 test_start 00:04:29.040 test_end 00:04:29.040 Performance: 536828 events per second 00:04:29.040 00:04:29.040 real 0m1.174s 00:04:29.040 user 0m1.088s 00:04:29.040 sys 0m0.083s 00:04:29.040 13:46:15 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.040 13:46:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.040 ************************************ 00:04:29.040 END TEST event_reactor_perf 00:04:29.040 ************************************ 00:04:29.040 13:46:15 event -- event/event.sh@49 -- # uname -s 00:04:29.040 13:46:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:29.040 13:46:15 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.040 13:46:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.040 13:46:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.040 13:46:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.040 ************************************ 00:04:29.040 START TEST event_scheduler 00:04:29.040 ************************************ 00:04:29.040 13:46:15 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.300 * Looking for test storage... 00:04:29.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.300 13:46:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.300 --rc genhtml_branch_coverage=1 00:04:29.300 --rc genhtml_function_coverage=1 00:04:29.300 --rc genhtml_legend=1 00:04:29.300 --rc geninfo_all_blocks=1 00:04:29.300 --rc geninfo_unexecuted_blocks=1 00:04:29.300 00:04:29.300 ' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.300 --rc genhtml_branch_coverage=1 00:04:29.300 --rc genhtml_function_coverage=1 00:04:29.300 --rc 
genhtml_legend=1 00:04:29.300 --rc geninfo_all_blocks=1 00:04:29.300 --rc geninfo_unexecuted_blocks=1 00:04:29.300 00:04:29.300 ' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.300 --rc genhtml_branch_coverage=1 00:04:29.300 --rc genhtml_function_coverage=1 00:04:29.300 --rc genhtml_legend=1 00:04:29.300 --rc geninfo_all_blocks=1 00:04:29.300 --rc geninfo_unexecuted_blocks=1 00:04:29.300 00:04:29.300 ' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.300 --rc genhtml_branch_coverage=1 00:04:29.300 --rc genhtml_function_coverage=1 00:04:29.300 --rc genhtml_legend=1 00:04:29.300 --rc geninfo_all_blocks=1 00:04:29.300 --rc geninfo_unexecuted_blocks=1 00:04:29.300 00:04:29.300 ' 00:04:29.300 13:46:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:29.300 13:46:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2174767 00:04:29.300 13:46:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.300 13:46:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:29.300 13:46:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2174767 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2174767 ']' 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.300 13:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.560 [2024-11-06 13:46:15.578433] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:29.560 [2024-11-06 13:46:15.578507] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174767 ] 00:04:29.560 [2024-11-06 13:46:15.671727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.560 [2024-11-06 13:46:15.727771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.560 [2024-11-06 13:46:15.727913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.560 [2024-11-06 13:46:15.728110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.560 [2024-11-06 13:46:15.728111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:30.130 13:46:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.130 [2024-11-06 13:46:16.390579] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:30.130 [2024-11-06 13:46:16.390599] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:30.130 [2024-11-06 13:46:16.390610] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:30.130 [2024-11-06 13:46:16.390616] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:30.130 [2024-11-06 13:46:16.390622] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.130 13:46:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.130 13:46:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 [2024-11-06 13:46:16.453798] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:30.390 13:46:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:30.390 13:46:16 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.390 13:46:16 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 ************************************ 00:04:30.390 START TEST scheduler_create_thread 00:04:30.390 ************************************ 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 2 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 3 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 4 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 5 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 6 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 7 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 8 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.390 9 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.390 13:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.962 10 00:04:30.962 13:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.962 13:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:30.962 13:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.962 13:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.344 13:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.344 13:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:32.344 13:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:32.344 13:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.344 13:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.914 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.174 13:46:19 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:33.174 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.174 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.744 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.744 13:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:33.744 13:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:33.744 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.744 13:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.684 13:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.684 00:04:34.684 real 0m4.225s 00:04:34.684 user 0m0.020s 00:04:34.684 sys 0m0.006s 00:04:34.684 13:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.684 13:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.684 ************************************ 00:04:34.684 END TEST scheduler_create_thread 00:04:34.684 ************************************ 00:04:34.684 13:46:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:34.684 13:46:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2174767 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2174767 ']' 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 2174767 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2174767 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2174767' 00:04:34.684 killing process with pid 2174767 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2174767 00:04:34.684 13:46:20 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2174767 00:04:34.943 [2024-11-06 13:46:20.995568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:34.943 00:04:34.943 real 0m5.838s 00:04:34.943 user 0m12.862s 00:04:34.943 sys 0m0.427s 00:04:34.943 13:46:21 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.943 13:46:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.943 ************************************ 00:04:34.943 END TEST event_scheduler 00:04:34.943 ************************************ 00:04:34.943 13:46:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:34.943 13:46:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:34.943 13:46:21 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.943 13:46:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.943 13:46:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.204 ************************************ 00:04:35.204 START TEST app_repeat 00:04:35.204 ************************************ 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2175992 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2175992' 00:04:35.204 Process app_repeat pid: 2175992 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:35.204 spdk_app_start Round 0 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2175992 /var/tmp/spdk-nbd.sock 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2175992 ']' 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.204 [2024-11-06 13:46:21.268205] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:35.204 [2024-11-06 13:46:21.268272] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175992 ] 00:04:35.204 [2024-11-06 13:46:21.353000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.204 [2024-11-06 13:46:21.384623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.204 [2024-11-06 13:46:21.384623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.204 13:46:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:35.204 13:46:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.464 Malloc0 00:04:35.464 13:46:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.724 Malloc1 00:04:35.725 13:46:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.725 
13:46:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.725 13:46:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.985 /dev/nbd0 00:04:35.985 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.985 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:35.985 1+0 records in 00:04:35.985 1+0 records out 00:04:35.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361668 s, 11.3 MB/s 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.985 13:46:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.985 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.985 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.985 13:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.244 /dev/nbd1 00:04:36.244 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.244 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:36.244 13:46:22 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.244 1+0 records in 00:04:36.244 1+0 records out 00:04:36.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274292 s, 14.9 MB/s 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:36.244 13:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.245 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:36.245 13:46:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.245 { 00:04:36.245 "nbd_device": "/dev/nbd0", 00:04:36.245 "bdev_name": "Malloc0" 00:04:36.245 }, 00:04:36.245 { 00:04:36.245 "nbd_device": "/dev/nbd1", 00:04:36.245 "bdev_name": "Malloc1" 00:04:36.245 } 00:04:36.245 ]' 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.245 { 00:04:36.245 "nbd_device": "/dev/nbd0", 00:04:36.245 "bdev_name": "Malloc0" 00:04:36.245 
}, 00:04:36.245 { 00:04:36.245 "nbd_device": "/dev/nbd1", 00:04:36.245 "bdev_name": "Malloc1" 00:04:36.245 } 00:04:36.245 ]' 00:04:36.245 13:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.505 /dev/nbd1' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.505 /dev/nbd1' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.505 256+0 records in 00:04:36.505 256+0 records out 00:04:36.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127315 s, 82.4 MB/s 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.505 256+0 records in 00:04:36.505 256+0 records out 00:04:36.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121151 s, 86.6 MB/s 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.505 256+0 records in 00:04:36.505 256+0 records out 00:04:36.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139131 s, 75.4 MB/s 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.505 13:46:22 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.505 13:46:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.766 13:46:22 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.766 13:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.026 13:46:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.026 13:46:23 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.287 13:46:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.287 [2024-11-06 13:46:23.483309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.287 [2024-11-06 13:46:23.512786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.287 [2024-11-06 13:46:23.512786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.287 [2024-11-06 13:46:23.541841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.287 [2024-11-06 13:46:23.541871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:40.732 spdk_app_start Round 1 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2175992 /var/tmp/spdk-nbd.sock 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2175992 ']' 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.732 13:46:26 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.732 Malloc0 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.732 Malloc1 00:04:40.732 13:46:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.732 13:46:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.993 /dev/nbd0 00:04:40.993 13:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.993 13:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.993 1+0 records in 00:04:40.993 1+0 records out 00:04:40.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027553 s, 14.9 MB/s 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:40.993 13:46:27 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:40.993 13:46:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:40.993 13:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.994 13:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.994 13:46:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.254 /dev/nbd1 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.254 1+0 records in 00:04:41.254 1+0 records out 00:04:41.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276943 s, 14.8 MB/s 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:41.254 13:46:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.254 13:46:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.514 13:46:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.514 { 00:04:41.514 "nbd_device": "/dev/nbd0", 00:04:41.515 "bdev_name": "Malloc0" 00:04:41.515 }, 00:04:41.515 { 00:04:41.515 "nbd_device": "/dev/nbd1", 00:04:41.515 "bdev_name": "Malloc1" 00:04:41.515 } 00:04:41.515 ]' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.515 { 00:04:41.515 "nbd_device": "/dev/nbd0", 00:04:41.515 "bdev_name": "Malloc0" 00:04:41.515 }, 00:04:41.515 { 00:04:41.515 "nbd_device": "/dev/nbd1", 00:04:41.515 "bdev_name": "Malloc1" 00:04:41.515 } 00:04:41.515 ]' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.515 /dev/nbd1' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.515 /dev/nbd1' 00:04:41.515 
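The `waitfornbd` helper traced above polls `/proc/partitions` until the kernel exposes the freshly attached device, giving up after 20 attempts. A minimal standalone sketch of that retry loop (the 0.1 s delay between polls is an assumption for illustration; the real helper lives in common/autotest_common.sh):

```shell
# Sketch of the waitfornbd retry loop seen in the trace:
# poll /proc/partitions until the named device shows up, up to 20 tries.
# The sleep interval is an assumption, not taken from the script.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    echo "timed out waiting for $nbd_name" >&2
    return 1
}
```

The `grep -w` word match is what the trace shows; it prevents `nbd1` from matching `nbd10`.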
13:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.515 256+0 records in 00:04:41.515 256+0 records out 00:04:41.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127574 s, 82.2 MB/s 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.515 256+0 records in 00:04:41.515 256+0 records out 00:04:41.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120289 s, 87.2 MB/s 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.515 256+0 records in 00:04:41.515 256+0 records out 00:04:41.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132063 s, 79.4 MB/s 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.515 13:46:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.776 13:46:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.037 13:46:28 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.037 13:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.297 13:46:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.297 13:46:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.297 13:46:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.557 [2024-11-06 13:46:28.595531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.557 [2024-11-06 13:46:28.625210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.557 [2024-11-06 13:46:28.625211] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.557 [2024-11-06 13:46:28.654856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.557 [2024-11-06 13:46:28.654887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.853 13:46:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.853 13:46:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:45.853 spdk_app_start Round 2 00:04:45.853 13:46:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2175992 /var/tmp/spdk-nbd.sock 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2175992 ']' 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
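Each round of the trace counts the attached devices by piping the `nbd_get_disks` RPC output through `jq` and `grep -c`. A standalone sketch of that counting step, using an inline JSON literal in place of the live RPC response (the `|| true` mirrors the trace, where `grep -c` exits non-zero once the list is empty):

```shell
# Sketch of the nbd_get_count logic from bdev/nbd_common.sh:
# pull every .nbd_device path out of the RPC JSON and count the matches.
# The JSON literal below stands in for live `rpc.py nbd_get_disks` output.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 2 for the sample above; 0 once all devices are detached
```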
00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.853 13:46:31 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:45.853 13:46:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.853 Malloc0 00:04:45.853 13:46:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.853 Malloc1 00:04:45.853 13:46:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.853 13:46:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.113 /dev/nbd0 00:04:46.113 13:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.113 13:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.113 1+0 records in 00:04:46.113 1+0 records out 00:04:46.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305146 s, 13.4 MB/s 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:46.113 13:46:32 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:46.113 13:46:32 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:46.113 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.113 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.113 13:46:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.374 /dev/nbd1 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.374 1+0 records in 00:04:46.374 1+0 records out 00:04:46.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254852 s, 16.1 MB/s 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:46.374 13:46:32 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.374 13:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.635 { 00:04:46.635 "nbd_device": "/dev/nbd0", 00:04:46.635 "bdev_name": "Malloc0" 00:04:46.635 }, 00:04:46.635 { 00:04:46.635 "nbd_device": "/dev/nbd1", 00:04:46.635 "bdev_name": "Malloc1" 00:04:46.635 } 00:04:46.635 ]' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.635 { 00:04:46.635 "nbd_device": "/dev/nbd0", 00:04:46.635 "bdev_name": "Malloc0" 00:04:46.635 }, 00:04:46.635 { 00:04:46.635 "nbd_device": "/dev/nbd1", 00:04:46.635 "bdev_name": "Malloc1" 00:04:46.635 } 00:04:46.635 ]' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.635 /dev/nbd1' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.635 /dev/nbd1' 00:04:46.635 
13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.635 256+0 records in 00:04:46.635 256+0 records out 00:04:46.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012747 s, 82.3 MB/s 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.635 256+0 records in 00:04:46.635 256+0 records out 00:04:46.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126119 s, 83.1 MB/s 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.635 256+0 records in 00:04:46.635 256+0 records out 00:04:46.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132161 s, 79.3 MB/s 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.635 13:46:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.896 13:46:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.156 13:46:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.156 13:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.416 13:46:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.416 13:46:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.416 13:46:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.676 [2024-11-06 13:46:33.736368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.676 [2024-11-06 13:46:33.765924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.676 [2024-11-06 13:46:33.766012] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.676 [2024-11-06 13:46:33.795070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.676 [2024-11-06 13:46:33.795106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.971 13:46:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2175992 /var/tmp/spdk-nbd.sock 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2175992 ']' 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:50.971 13:46:36 event.app_repeat -- event/event.sh@39 -- # killprocess 2175992 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2175992 ']' 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2175992 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2175992 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2175992' 00:04:50.971 killing process with pid 2175992 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2175992 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2175992 00:04:50.971 spdk_app_start is called in Round 0. 00:04:50.971 Shutdown signal received, stop current app iteration 00:04:50.971 Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 reinitialization... 00:04:50.971 spdk_app_start is called in Round 1. 00:04:50.971 Shutdown signal received, stop current app iteration 00:04:50.971 Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 reinitialization... 00:04:50.971 spdk_app_start is called in Round 2. 
00:04:50.971 Shutdown signal received, stop current app iteration 00:04:50.971 Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 reinitialization... 00:04:50.971 spdk_app_start is called in Round 3. 00:04:50.971 Shutdown signal received, stop current app iteration 00:04:50.971 13:46:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.971 13:46:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.971 00:04:50.971 real 0m15.764s 00:04:50.971 user 0m34.599s 00:04:50.971 sys 0m2.272s 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.971 13:46:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.971 ************************************ 00:04:50.971 END TEST app_repeat 00:04:50.971 ************************************ 00:04:50.971 13:46:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.971 13:46:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.971 13:46:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.971 13:46:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.971 13:46:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.971 ************************************ 00:04:50.971 START TEST cpu_locks 00:04:50.971 ************************************ 00:04:50.971 13:46:37 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.971 * Looking for test storage... 
00:04:50.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.971 13:46:37 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.971 13:46:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.971 13:46:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.232 13:46:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.232 13:46:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:51.232 13:46:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.232 13:46:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.233 --rc genhtml_branch_coverage=1 00:04:51.233 --rc genhtml_function_coverage=1 00:04:51.233 --rc genhtml_legend=1 00:04:51.233 --rc geninfo_all_blocks=1 00:04:51.233 --rc geninfo_unexecuted_blocks=1 00:04:51.233 00:04:51.233 ' 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.233 --rc genhtml_branch_coverage=1 00:04:51.233 --rc genhtml_function_coverage=1 00:04:51.233 --rc genhtml_legend=1 00:04:51.233 --rc geninfo_all_blocks=1 00:04:51.233 --rc geninfo_unexecuted_blocks=1 
00:04:51.233 00:04:51.233 ' 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.233 --rc genhtml_branch_coverage=1 00:04:51.233 --rc genhtml_function_coverage=1 00:04:51.233 --rc genhtml_legend=1 00:04:51.233 --rc geninfo_all_blocks=1 00:04:51.233 --rc geninfo_unexecuted_blocks=1 00:04:51.233 00:04:51.233 ' 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.233 --rc genhtml_branch_coverage=1 00:04:51.233 --rc genhtml_function_coverage=1 00:04:51.233 --rc genhtml_legend=1 00:04:51.233 --rc geninfo_all_blocks=1 00:04:51.233 --rc geninfo_unexecuted_blocks=1 00:04:51.233 00:04:51.233 ' 00:04:51.233 13:46:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:51.233 13:46:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:51.233 13:46:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:51.233 13:46:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.233 13:46:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.233 ************************************ 00:04:51.233 START TEST default_locks 00:04:51.233 ************************************ 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2179263 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2179263 00:04:51.233 13:46:37 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2179263 ']' 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.233 13:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.233 [2024-11-06 13:46:37.371459] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:51.233 [2024-11-06 13:46:37.371509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179263 ] 00:04:51.233 [2024-11-06 13:46:37.457703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.233 [2024-11-06 13:46:37.490131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.174 lslocks: write error 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2179263 ']' 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2179263' 00:04:52.174 killing process with pid 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2179263 00:04:52.174 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2179263 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2179263 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2179263 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2179263 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2179263 ']' 00:04:52.434 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2179263) - No such process 00:04:52.435 ERROR: process (pid: 2179263) is no longer running 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.435 00:04:52.435 real 0m1.287s 00:04:52.435 user 0m1.394s 00:04:52.435 sys 0m0.422s 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.435 13:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.435 ************************************ 00:04:52.435 END TEST default_locks 00:04:52.435 ************************************ 00:04:52.435 13:46:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:52.435 13:46:38 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.435 13:46:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.435 13:46:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.435 ************************************ 00:04:52.435 START TEST default_locks_via_rpc 00:04:52.435 ************************************ 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2179628 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2179628 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2179628 ']' 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.435 13:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.696 [2024-11-06 13:46:38.735651] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:52.696 [2024-11-06 13:46:38.735706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179628 ] 00:04:52.696 [2024-11-06 13:46:38.820038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.696 [2024-11-06 13:46:38.851124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.266 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.267 13:46:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2179628 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2179628 00:04:53.267 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.838 13:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2179628 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2179628 ']' 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2179628 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2179628 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2179628' 00:04:53.838 killing process with pid 2179628 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2179628 00:04:53.838 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2179628 00:04:54.098 00:04:54.098 real 0m1.574s 00:04:54.098 user 0m1.702s 00:04:54.098 sys 0m0.537s 00:04:54.098 13:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.098 13:46:40 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.098 ************************************ 00:04:54.098 END TEST default_locks_via_rpc 00:04:54.098 ************************************ 00:04:54.098 13:46:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:54.098 13:46:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.098 13:46:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.098 13:46:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.098 ************************************ 00:04:54.098 START TEST non_locking_app_on_locked_coremask 00:04:54.098 ************************************ 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2179996 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2179996 /var/tmp/spdk.sock 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2179996 ']' 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:54.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.098 13:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.359 [2024-11-06 13:46:40.385537] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:54.359 [2024-11-06 13:46:40.385587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179996 ] 00:04:54.359 [2024-11-06 13:46:40.469796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.359 [2024-11-06 13:46:40.500134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2180168 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2180168 /var/tmp/spdk2.sock 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2180168 ']' 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.928 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.928 [2024-11-06 13:46:41.201430] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:54.928 [2024-11-06 13:46:41.201482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180168 ] 00:04:55.189 [2024-11-06 13:46:41.285595] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.189 [2024-11-06 13:46:41.285619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.189 [2024-11-06 13:46:41.348064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.759 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.759 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:55.759 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2179996 00:04:55.759 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2179996 00:04:55.759 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.019 lslocks: write error 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2179996 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2179996 ']' 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2179996 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.019 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2179996 00:04:56.279 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.279 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.279 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2179996' 00:04:56.279 killing process with pid 2179996 00:04:56.279 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2179996 00:04:56.279 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2179996 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2180168 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2180168 ']' 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2180168 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2180168 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2180168' 00:04:56.539 killing process with pid 2180168 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2180168 00:04:56.539 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2180168 00:04:56.799 00:04:56.799 real 0m2.611s 00:04:56.799 user 0m2.913s 00:04:56.799 sys 0m0.765s 00:04:56.799 13:46:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.799 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.799 ************************************ 00:04:56.799 END TEST non_locking_app_on_locked_coremask 00:04:56.799 ************************************ 00:04:56.799 13:46:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:56.799 13:46:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.799 13:46:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.799 13:46:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.799 ************************************ 00:04:56.799 START TEST locking_app_on_unlocked_coremask 00:04:56.799 ************************************ 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2180601 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2180601 /var/tmp/spdk.sock 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2180601 ']' 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.799 13:46:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.799 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.800 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.059 [2024-11-06 13:46:43.083654] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:57.059 [2024-11-06 13:46:43.083707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180601 ] 00:04:57.059 [2024-11-06 13:46:43.169197] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:57.059 [2024-11-06 13:46:43.169222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.059 [2024-11-06 13:46:43.201327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2180718 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2180718 /var/tmp/spdk2.sock 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2180718 ']' 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.629 13:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.889 [2024-11-06 13:46:43.922448] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:04:57.889 [2024-11-06 13:46:43.922502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180718 ] 00:04:57.889 [2024-11-06 13:46:44.010674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.889 [2024-11-06 13:46:44.069018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.461 13:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.461 13:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:58.461 13:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2180718 00:04:58.461 13:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2180718 00:04:58.461 13:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.403 lslocks: write error 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2180601 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2180601 ']' 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2180601 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2180601 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2180601' 00:04:59.403 killing process with pid 2180601 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2180601 00:04:59.403 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2180601 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2180718 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2180718 ']' 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2180718 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2180718 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2180718' 00:04:59.664 killing process with pid 2180718 00:04:59.664 13:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2180718 00:04:59.664 13:46:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2180718 00:04:59.925 00:04:59.925 real 0m3.019s 00:04:59.925 user 0m3.379s 00:04:59.925 sys 0m0.927s 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 ************************************ 00:04:59.925 END TEST locking_app_on_unlocked_coremask 00:04:59.925 ************************************ 00:04:59.925 13:46:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:59.925 13:46:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.925 13:46:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.925 13:46:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 ************************************ 00:04:59.925 START TEST locking_app_on_locked_coremask 00:04:59.925 ************************************ 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2181137 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2181137 /var/tmp/spdk.sock 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2181137 ']' 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.925 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 [2024-11-06 13:46:46.170576] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:04:59.925 [2024-11-06 13:46:46.170629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181137 ] 00:05:00.185 [2024-11-06 13:46:46.257933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.185 [2024-11-06 13:46:46.291048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.754 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2181425 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2181425 /var/tmp/spdk2.sock 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2181425 /var/tmp/spdk2.sock 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2181425 /var/tmp/spdk2.sock 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2181425 ']' 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.755 13:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.755 [2024-11-06 13:46:47.012282] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:00.755 [2024-11-06 13:46:47.012335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181425 ] 00:05:01.014 [2024-11-06 13:46:47.099850] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2181137 has claimed it. 00:05:01.014 [2024-11-06 13:46:47.099880] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2181425) - No such process 00:05:01.584 ERROR: process (pid: 2181425) is no longer running 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2181137 00:05:01.584 13:46:47 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2181137 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.584 lslocks: write error 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2181137 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2181137 ']' 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2181137 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.584 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2181137 00:05:01.844 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.844 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.845 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2181137' 00:05:01.845 killing process with pid 2181137 00:05:01.845 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2181137 00:05:01.845 13:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2181137 00:05:01.845 00:05:01.845 real 0m1.969s 00:05:01.845 user 0m2.240s 00:05:01.845 sys 0m0.523s 00:05:01.845 13:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.845 13:46:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.845 ************************************ 00:05:01.845 END TEST locking_app_on_locked_coremask 00:05:01.845 ************************************ 00:05:01.845 13:46:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:01.845 13:46:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.845 13:46:48 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.845 13:46:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.105 ************************************ 00:05:02.105 START TEST locking_overlapped_coremask 00:05:02.105 ************************************ 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2181679 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2181679 /var/tmp/spdk.sock 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2181679 ']' 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.105 13:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.105 [2024-11-06 13:46:48.218853] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:02.105 [2024-11-06 13:46:48.218913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181679 ] 00:05:02.105 [2024-11-06 13:46:48.307682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.105 [2024-11-06 13:46:48.342117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.105 [2024-11-06 13:46:48.342267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.105 [2024-11-06 13:46:48.342268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2181806 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2181806 /var/tmp/spdk2.sock 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 2181806 /var/tmp/spdk2.sock 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2181806 /var/tmp/spdk2.sock 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2181806 ']' 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.046 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.046 [2024-11-06 13:46:49.070808] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:05:03.046 [2024-11-06 13:46:49.070862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181806 ] 00:05:03.046 [2024-11-06 13:46:49.183615] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2181679 has claimed it. 00:05:03.046 [2024-11-06 13:46:49.183655] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:03.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2181806) - No such process 00:05:03.616 ERROR: process (pid: 2181806) is no longer running 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2181679 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2181679 ']' 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2181679 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2181679 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2181679' 00:05:03.616 killing process with pid 2181679 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2181679 00:05:03.616 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2181679 00:05:03.876 00:05:03.876 real 0m1.783s 00:05:03.876 user 0m5.149s 00:05:03.876 sys 0m0.402s 00:05:03.876 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.876 13:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 
************************************ 00:05:03.876 END TEST locking_overlapped_coremask 00:05:03.876 ************************************ 00:05:03.876 13:46:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:03.876 13:46:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.876 13:46:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.876 13:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 ************************************ 00:05:03.876 START TEST locking_overlapped_coremask_via_rpc 00:05:03.876 ************************************ 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2182131 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2182131 /var/tmp/spdk.sock 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2182131 ']' 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:03.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.876 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 [2024-11-06 13:46:50.078112] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:03.876 [2024-11-06 13:46:50.078174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182131 ] 00:05:04.136 [2024-11-06 13:46:50.163555] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:04.136 [2024-11-06 13:46:50.163584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.136 [2024-11-06 13:46:50.196864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.136 [2024-11-06 13:46:50.197110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.136 [2024-11-06 13:46:50.197111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2182180 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2182180 /var/tmp/spdk2.sock 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2182180 ']' 00:05:04.707 13:46:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.707 13:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.707 [2024-11-06 13:46:50.932685] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:04.707 [2024-11-06 13:46:50.932740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182180 ] 00:05:04.967 [2024-11-06 13:46:51.046520] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.967 [2024-11-06 13:46:51.046552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.967 [2024-11-06 13:46:51.124248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.967 [2024-11-06 13:46:51.124366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.967 [2024-11-06 13:46:51.124367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.538 13:46:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.538 [2024-11-06 13:46:51.739834] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2182131 has claimed it. 00:05:05.538 request: 00:05:05.538 { 00:05:05.538 "method": "framework_enable_cpumask_locks", 00:05:05.538 "req_id": 1 00:05:05.538 } 00:05:05.538 Got JSON-RPC error response 00:05:05.538 response: 00:05:05.538 { 00:05:05.538 "code": -32603, 00:05:05.538 "message": "Failed to claim CPU core: 2" 00:05:05.538 } 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2182131 /var/tmp/spdk.sock 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 2182131 ']' 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.538 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2182180 /var/tmp/spdk2.sock 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2182180 ']' 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.798 13:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:06.058 00:05:06.058 real 0m2.105s 00:05:06.058 user 0m0.872s 00:05:06.058 sys 0m0.146s 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.058 13:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.058 ************************************ 00:05:06.058 END TEST locking_overlapped_coremask_via_rpc 00:05:06.058 ************************************ 00:05:06.058 13:46:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:06.058 13:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2182131 ]] 00:05:06.058 13:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2182131 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2182131 ']' 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2182131 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2182131 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2182131' 00:05:06.058 killing process with pid 2182131 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2182131 00:05:06.058 13:46:52 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2182131 00:05:06.318 13:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2182180 ]] 00:05:06.318 13:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2182180 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2182180 ']' 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2182180 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2182180 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2182180' 00:05:06.318 killing process with pid 2182180 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2182180 00:05:06.318 13:46:52 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2182180 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2182131 ]] 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2182131 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2182131 ']' 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2182131 00:05:06.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2182131) - No such process 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2182131 is not found' 00:05:06.578 Process with pid 2182131 is not found 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2182180 ]] 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2182180 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2182180 ']' 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2182180 00:05:06.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2182180) - No such process 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2182180 is not found' 00:05:06.578 Process with pid 2182180 is not found 00:05:06.578 13:46:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.578 00:05:06.578 real 0m15.664s 00:05:06.578 user 0m27.902s 00:05:06.578 sys 0m4.673s 00:05:06.578 13:46:52 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.578 
13:46:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.578 ************************************ 00:05:06.578 END TEST cpu_locks 00:05:06.578 ************************************ 00:05:06.578 00:05:06.578 real 0m41.449s 00:05:06.578 user 1m21.943s 00:05:06.578 sys 0m8.000s 00:05:06.578 13:46:52 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.578 13:46:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.578 ************************************ 00:05:06.578 END TEST event 00:05:06.578 ************************************ 00:05:06.578 13:46:52 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:06.578 13:46:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.578 13:46:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.578 13:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:06.839 ************************************ 00:05:06.839 START TEST thread 00:05:06.839 ************************************ 00:05:06.839 13:46:52 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:06.839 * Looking for test storage... 
00:05:06.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:06.839 13:46:52 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.839 13:46:52 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.839 13:46:52 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.839 13:46:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.839 13:46:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.839 13:46:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.839 13:46:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.839 13:46:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.839 13:46:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.839 13:46:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.839 13:46:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.839 13:46:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.839 13:46:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.839 13:46:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.839 13:46:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:06.839 13:46:53 thread -- scripts/common.sh@345 -- # : 1 00:05:06.839 13:46:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.839 13:46:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.839 13:46:53 thread -- scripts/common.sh@365 -- # decimal 1 00:05:06.839 13:46:53 thread -- scripts/common.sh@353 -- # local d=1 00:05:06.839 13:46:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.839 13:46:53 thread -- scripts/common.sh@355 -- # echo 1 00:05:06.839 13:46:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.839 13:46:53 thread -- scripts/common.sh@366 -- # decimal 2 00:05:06.839 13:46:53 thread -- scripts/common.sh@353 -- # local d=2 00:05:06.839 13:46:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.839 13:46:53 thread -- scripts/common.sh@355 -- # echo 2 00:05:06.839 13:46:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.839 13:46:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.839 13:46:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.839 13:46:53 thread -- scripts/common.sh@368 -- # return 0 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.839 --rc genhtml_branch_coverage=1 00:05:06.839 --rc genhtml_function_coverage=1 00:05:06.839 --rc genhtml_legend=1 00:05:06.839 --rc geninfo_all_blocks=1 00:05:06.839 --rc geninfo_unexecuted_blocks=1 00:05:06.839 00:05:06.839 ' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.839 --rc genhtml_branch_coverage=1 00:05:06.839 --rc genhtml_function_coverage=1 00:05:06.839 --rc genhtml_legend=1 00:05:06.839 --rc geninfo_all_blocks=1 00:05:06.839 --rc geninfo_unexecuted_blocks=1 00:05:06.839 00:05:06.839 ' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.839 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.839 --rc genhtml_branch_coverage=1 00:05:06.839 --rc genhtml_function_coverage=1 00:05:06.839 --rc genhtml_legend=1 00:05:06.839 --rc geninfo_all_blocks=1 00:05:06.839 --rc geninfo_unexecuted_blocks=1 00:05:06.839 00:05:06.839 ' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.839 --rc genhtml_branch_coverage=1 00:05:06.839 --rc genhtml_function_coverage=1 00:05:06.839 --rc genhtml_legend=1 00:05:06.839 --rc geninfo_all_blocks=1 00:05:06.839 --rc geninfo_unexecuted_blocks=1 00:05:06.839 00:05:06.839 ' 00:05:06.839 13:46:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.839 13:46:53 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.839 ************************************ 00:05:06.839 START TEST thread_poller_perf 00:05:06.839 ************************************ 00:05:06.839 13:46:53 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:07.099 [2024-11-06 13:46:53.122696] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:05:07.099 [2024-11-06 13:46:53.122816] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182742 ] 00:05:07.099 [2024-11-06 13:46:53.213150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.100 [2024-11-06 13:46:53.252515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.100 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:08.040 [2024-11-06T12:46:54.320Z] ====================================== 00:05:08.040 [2024-11-06T12:46:54.320Z] busy:2405119962 (cyc) 00:05:08.040 [2024-11-06T12:46:54.320Z] total_run_count: 419000 00:05:08.040 [2024-11-06T12:46:54.320Z] tsc_hz: 2400000000 (cyc) 00:05:08.040 [2024-11-06T12:46:54.320Z] ====================================== 00:05:08.040 [2024-11-06T12:46:54.320Z] poller_cost: 5740 (cyc), 2391 (nsec) 00:05:08.040 00:05:08.040 real 0m1.184s 00:05:08.040 user 0m1.093s 00:05:08.040 sys 0m0.086s 00:05:08.040 13:46:54 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.040 13:46:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.040 ************************************ 00:05:08.040 END TEST thread_poller_perf 00:05:08.040 ************************************ 00:05:08.300 13:46:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.300 13:46:54 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:08.300 13:46:54 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.300 13:46:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.300 ************************************ 00:05:08.300 START TEST thread_poller_perf 00:05:08.300 
************************************ 00:05:08.300 13:46:54 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.300 [2024-11-06 13:46:54.385209] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:08.300 [2024-11-06 13:46:54.385303] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182983 ] 00:05:08.300 [2024-11-06 13:46:54.475342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.300 [2024-11-06 13:46:54.506927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.300 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:09.683 [2024-11-06T12:46:55.963Z] ====================================== 00:05:09.683 [2024-11-06T12:46:55.963Z] busy:2401441390 (cyc) 00:05:09.683 [2024-11-06T12:46:55.963Z] total_run_count: 5562000 00:05:09.683 [2024-11-06T12:46:55.963Z] tsc_hz: 2400000000 (cyc) 00:05:09.683 [2024-11-06T12:46:55.963Z] ====================================== 00:05:09.683 [2024-11-06T12:46:55.963Z] poller_cost: 431 (cyc), 179 (nsec) 00:05:09.683 00:05:09.683 real 0m1.170s 00:05:09.683 user 0m1.087s 00:05:09.683 sys 0m0.079s 00:05:09.683 13:46:55 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.683 13:46:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.683 ************************************ 00:05:09.683 END TEST thread_poller_perf 00:05:09.683 ************************************ 00:05:09.683 13:46:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:09.683 00:05:09.683 real 0m2.715s 00:05:09.683 user 0m2.355s 00:05:09.683 sys 0m0.373s 00:05:09.683 13:46:55 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.683 13:46:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.683 ************************************ 00:05:09.683 END TEST thread 00:05:09.683 ************************************ 00:05:09.683 13:46:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:09.683 13:46:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:09.683 13:46:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.683 13:46:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.683 13:46:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.683 ************************************ 00:05:09.683 START TEST app_cmdline 00:05:09.683 ************************************ 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:09.683 * Looking for test storage... 00:05:09.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.683 13:46:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.683 13:46:55 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.683 --rc genhtml_branch_coverage=1 
00:05:09.683 --rc genhtml_function_coverage=1 00:05:09.683 --rc genhtml_legend=1 00:05:09.683 --rc geninfo_all_blocks=1 00:05:09.684 --rc geninfo_unexecuted_blocks=1 00:05:09.684 00:05:09.684 ' 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.684 --rc genhtml_branch_coverage=1 00:05:09.684 --rc genhtml_function_coverage=1 00:05:09.684 --rc genhtml_legend=1 00:05:09.684 --rc geninfo_all_blocks=1 00:05:09.684 --rc geninfo_unexecuted_blocks=1 00:05:09.684 00:05:09.684 ' 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.684 --rc genhtml_branch_coverage=1 00:05:09.684 --rc genhtml_function_coverage=1 00:05:09.684 --rc genhtml_legend=1 00:05:09.684 --rc geninfo_all_blocks=1 00:05:09.684 --rc geninfo_unexecuted_blocks=1 00:05:09.684 00:05:09.684 ' 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.684 --rc genhtml_branch_coverage=1 00:05:09.684 --rc genhtml_function_coverage=1 00:05:09.684 --rc genhtml_legend=1 00:05:09.684 --rc geninfo_all_blocks=1 00:05:09.684 --rc geninfo_unexecuted_blocks=1 00:05:09.684 00:05:09.684 ' 00:05:09.684 13:46:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:09.684 13:46:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2183378 00:05:09.684 13:46:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2183378 00:05:09.684 13:46:55 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2183378 ']' 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.684 13:46:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:09.684 [2024-11-06 13:46:55.913476] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:09.684 [2024-11-06 13:46:55.913526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183378 ] 00:05:09.944 [2024-11-06 13:46:55.998681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.944 [2024-11-06 13:46:56.030601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.514 13:46:56 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.514 13:46:56 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:10.514 13:46:56 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:10.774 { 00:05:10.774 "version": "SPDK v25.01-pre git sha1 159fecd99", 00:05:10.774 "fields": { 00:05:10.774 "major": 25, 00:05:10.774 "minor": 1, 00:05:10.774 "patch": 0, 00:05:10.774 "suffix": "-pre", 00:05:10.774 "commit": "159fecd99" 00:05:10.774 } 00:05:10.774 } 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:10.774 13:46:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:10.774 13:46:56 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.034 request: 00:05:11.034 { 00:05:11.034 "method": "env_dpdk_get_mem_stats", 00:05:11.034 "req_id": 1 00:05:11.034 } 00:05:11.034 Got JSON-RPC error response 00:05:11.034 response: 00:05:11.034 { 00:05:11.034 "code": -32601, 00:05:11.034 "message": "Method not found" 00:05:11.034 } 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.034 13:46:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2183378 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2183378 ']' 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2183378 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2183378 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2183378' 00:05:11.034 killing process with pid 2183378 00:05:11.034 
13:46:57 app_cmdline -- common/autotest_common.sh@971 -- # kill 2183378 00:05:11.034 13:46:57 app_cmdline -- common/autotest_common.sh@976 -- # wait 2183378 00:05:11.295 00:05:11.295 real 0m1.719s 00:05:11.295 user 0m2.063s 00:05:11.295 sys 0m0.466s 00:05:11.295 13:46:57 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.295 13:46:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.295 ************************************ 00:05:11.295 END TEST app_cmdline 00:05:11.295 ************************************ 00:05:11.295 13:46:57 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:11.295 13:46:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.295 13:46:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.295 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.295 ************************************ 00:05:11.295 START TEST version 00:05:11.295 ************************************ 00:05:11.295 13:46:57 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:11.295 * Looking for test storage... 
00:05:11.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:11.295 13:46:57 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.296 13:46:57 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.296 13:46:57 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.556 13:46:57 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.556 13:46:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.556 13:46:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.556 13:46:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.556 13:46:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.556 13:46:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.556 13:46:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.556 13:46:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.556 13:46:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.556 13:46:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.556 13:46:57 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.556 13:46:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.556 13:46:57 version -- scripts/common.sh@344 -- # case "$op" in 00:05:11.556 13:46:57 version -- scripts/common.sh@345 -- # : 1 00:05:11.556 13:46:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.556 13:46:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.556 13:46:57 version -- scripts/common.sh@365 -- # decimal 1 00:05:11.556 13:46:57 version -- scripts/common.sh@353 -- # local d=1 00:05:11.556 13:46:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.556 13:46:57 version -- scripts/common.sh@355 -- # echo 1 00:05:11.556 13:46:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.556 13:46:57 version -- scripts/common.sh@366 -- # decimal 2 00:05:11.556 13:46:57 version -- scripts/common.sh@353 -- # local d=2 00:05:11.556 13:46:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.556 13:46:57 version -- scripts/common.sh@355 -- # echo 2 00:05:11.557 13:46:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.557 13:46:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.557 13:46:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.557 13:46:57 version -- scripts/common.sh@368 -- # return 0 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.557 --rc genhtml_branch_coverage=1 00:05:11.557 --rc genhtml_function_coverage=1 00:05:11.557 --rc genhtml_legend=1 00:05:11.557 --rc geninfo_all_blocks=1 00:05:11.557 --rc geninfo_unexecuted_blocks=1 00:05:11.557 00:05:11.557 ' 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.557 --rc genhtml_branch_coverage=1 00:05:11.557 --rc genhtml_function_coverage=1 00:05:11.557 --rc genhtml_legend=1 00:05:11.557 --rc geninfo_all_blocks=1 00:05:11.557 --rc geninfo_unexecuted_blocks=1 00:05:11.557 00:05:11.557 ' 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.557 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.557 --rc genhtml_branch_coverage=1 00:05:11.557 --rc genhtml_function_coverage=1 00:05:11.557 --rc genhtml_legend=1 00:05:11.557 --rc geninfo_all_blocks=1 00:05:11.557 --rc geninfo_unexecuted_blocks=1 00:05:11.557 00:05:11.557 ' 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.557 --rc genhtml_branch_coverage=1 00:05:11.557 --rc genhtml_function_coverage=1 00:05:11.557 --rc genhtml_legend=1 00:05:11.557 --rc geninfo_all_blocks=1 00:05:11.557 --rc geninfo_unexecuted_blocks=1 00:05:11.557 00:05:11.557 ' 00:05:11.557 13:46:57 version -- app/version.sh@17 -- # get_header_version major 00:05:11.557 13:46:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # cut -f2 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:11.557 13:46:57 version -- app/version.sh@17 -- # major=25 00:05:11.557 13:46:57 version -- app/version.sh@18 -- # get_header_version minor 00:05:11.557 13:46:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # cut -f2 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:11.557 13:46:57 version -- app/version.sh@18 -- # minor=1 00:05:11.557 13:46:57 version -- app/version.sh@19 -- # get_header_version patch 00:05:11.557 13:46:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # cut -f2 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:11.557 
13:46:57 version -- app/version.sh@19 -- # patch=0 00:05:11.557 13:46:57 version -- app/version.sh@20 -- # get_header_version suffix 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # cut -f2 00:05:11.557 13:46:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:11.557 13:46:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:11.557 13:46:57 version -- app/version.sh@20 -- # suffix=-pre 00:05:11.557 13:46:57 version -- app/version.sh@22 -- # version=25.1 00:05:11.557 13:46:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:11.557 13:46:57 version -- app/version.sh@28 -- # version=25.1rc0 00:05:11.557 13:46:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:11.557 13:46:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:11.557 13:46:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:11.557 13:46:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:11.557 00:05:11.557 real 0m0.273s 00:05:11.557 user 0m0.184s 00:05:11.557 sys 0m0.137s 00:05:11.557 13:46:57 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.557 13:46:57 version -- common/autotest_common.sh@10 -- # set +x 00:05:11.557 ************************************ 00:05:11.557 END TEST version 00:05:11.557 ************************************ 00:05:11.557 13:46:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:11.557 13:46:57 -- spdk/autotest.sh@194 -- # uname -s 00:05:11.557 13:46:57 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:11.557 13:46:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:11.557 13:46:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:11.557 13:46:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:11.557 13:46:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.557 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.557 13:46:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:11.557 13:46:57 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:11.557 13:46:57 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:11.557 13:46:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:11.557 13:46:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.557 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 START TEST nvmf_tcp 00:05:11.819 ************************************ 00:05:11.819 13:46:57 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:11.819 * Looking for test storage... 
00:05:11.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:11.819 13:46:57 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.819 13:46:57 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.819 13:46:57 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.819 13:46:58 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.819 --rc genhtml_branch_coverage=1 00:05:11.819 --rc genhtml_function_coverage=1 00:05:11.819 --rc genhtml_legend=1 00:05:11.819 --rc geninfo_all_blocks=1 00:05:11.819 --rc geninfo_unexecuted_blocks=1 00:05:11.819 00:05:11.819 ' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.819 --rc genhtml_branch_coverage=1 00:05:11.819 --rc genhtml_function_coverage=1 00:05:11.819 --rc genhtml_legend=1 00:05:11.819 --rc geninfo_all_blocks=1 00:05:11.819 --rc geninfo_unexecuted_blocks=1 00:05:11.819 00:05:11.819 ' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:11.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.819 --rc genhtml_branch_coverage=1 00:05:11.819 --rc genhtml_function_coverage=1 00:05:11.819 --rc genhtml_legend=1 00:05:11.819 --rc geninfo_all_blocks=1 00:05:11.819 --rc geninfo_unexecuted_blocks=1 00:05:11.819 00:05:11.819 ' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.819 --rc genhtml_branch_coverage=1 00:05:11.819 --rc genhtml_function_coverage=1 00:05:11.819 --rc genhtml_legend=1 00:05:11.819 --rc geninfo_all_blocks=1 00:05:11.819 --rc geninfo_unexecuted_blocks=1 00:05:11.819 00:05:11.819 ' 00:05:11.819 13:46:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:11.819 13:46:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:11.819 13:46:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.819 13:46:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 START TEST nvmf_target_core 00:05:11.819 ************************************ 00:05:11.819 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:12.080 * Looking for test storage... 
00:05:12.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.080 --rc genhtml_branch_coverage=1 00:05:12.080 --rc genhtml_function_coverage=1 00:05:12.080 --rc genhtml_legend=1 00:05:12.080 --rc geninfo_all_blocks=1 00:05:12.080 --rc geninfo_unexecuted_blocks=1 00:05:12.080 00:05:12.080 ' 00:05:12.080 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.080 --rc genhtml_branch_coverage=1 
00:05:12.080 --rc genhtml_function_coverage=1 00:05:12.080 --rc genhtml_legend=1 00:05:12.080 --rc geninfo_all_blocks=1 00:05:12.081 --rc geninfo_unexecuted_blocks=1 00:05:12.081 00:05:12.081 ' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.081 --rc genhtml_branch_coverage=1 00:05:12.081 --rc genhtml_function_coverage=1 00:05:12.081 --rc genhtml_legend=1 00:05:12.081 --rc geninfo_all_blocks=1 00:05:12.081 --rc geninfo_unexecuted_blocks=1 00:05:12.081 00:05:12.081 ' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.081 --rc genhtml_branch_coverage=1 00:05:12.081 --rc genhtml_function_coverage=1 00:05:12.081 --rc genhtml_legend=1 00:05:12.081 --rc geninfo_all_blocks=1 00:05:12.081 --rc geninfo_unexecuted_blocks=1 00:05:12.081 00:05:12.081 ' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.081 13:46:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:12.342 ************************************ 00:05:12.342 START TEST nvmf_abort 00:05:12.342 ************************************ 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:12.342 * Looking for test storage... 
00:05:12.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.342 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.343 
13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.343 --rc genhtml_branch_coverage=1 00:05:12.343 --rc genhtml_function_coverage=1 00:05:12.343 --rc genhtml_legend=1 00:05:12.343 --rc geninfo_all_blocks=1 00:05:12.343 --rc 
geninfo_unexecuted_blocks=1 00:05:12.343 00:05:12.343 ' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.343 --rc genhtml_branch_coverage=1 00:05:12.343 --rc genhtml_function_coverage=1 00:05:12.343 --rc genhtml_legend=1 00:05:12.343 --rc geninfo_all_blocks=1 00:05:12.343 --rc geninfo_unexecuted_blocks=1 00:05:12.343 00:05:12.343 ' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.343 --rc genhtml_branch_coverage=1 00:05:12.343 --rc genhtml_function_coverage=1 00:05:12.343 --rc genhtml_legend=1 00:05:12.343 --rc geninfo_all_blocks=1 00:05:12.343 --rc geninfo_unexecuted_blocks=1 00:05:12.343 00:05:12.343 ' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.343 --rc genhtml_branch_coverage=1 00:05:12.343 --rc genhtml_function_coverage=1 00:05:12.343 --rc genhtml_legend=1 00:05:12.343 --rc geninfo_all_blocks=1 00:05:12.343 --rc geninfo_unexecuted_blocks=1 00:05:12.343 00:05:12.343 ' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.343 13:46:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.343 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:12.344 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.605 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:12.605 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:12.605 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:12.605 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:20.743 13:47:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:20.743 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:20.743 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:20.743 13:47:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:20.743 Found net devices under 0000:31:00.0: cvl_0_0 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.743 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:05:20.744 Found net devices under 0000:31:00.1: cvl_0_1 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:20.744 13:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:20.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:20.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:05:20.744 00:05:20.744 --- 10.0.0.2 ping statistics --- 00:05:20.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.744 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:20.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:20.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:05:20.744 00:05:20.744 --- 10.0.0.1 ping statistics --- 00:05:20.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.744 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2187899 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2187899 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2187899 ']' 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.744 13:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.744 [2024-11-06 13:47:06.286345] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:05:20.744 [2024-11-06 13:47:06.286407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:20.744 [2024-11-06 13:47:06.388864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.744 [2024-11-06 13:47:06.441827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:20.744 [2024-11-06 13:47:06.441884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:20.744 [2024-11-06 13:47:06.441894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.744 [2024-11-06 13:47:06.441901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.744 [2024-11-06 13:47:06.441907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:20.744 [2024-11-06 13:47:06.443780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.744 [2024-11-06 13:47:06.443979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.744 [2024-11-06 13:47:06.443980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 [2024-11-06 13:47:07.169283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 Malloc0 00:05:21.005 13:47:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 Delay0 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 [2024-11-06 13:47:07.255227] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.005 13:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:21.266 [2024-11-06 13:47:07.403990] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:23.177 Initializing NVMe Controllers 00:05:23.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:23.177 controller IO queue size 128 less than required 00:05:23.177 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:23.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:23.177 Initialization complete. Launching workers. 
00:05:23.177 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28323 00:05:23.177 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28384, failed to submit 62 00:05:23.177 success 28327, unsuccessful 57, failed 0 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:23.177 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:23.438 rmmod nvme_tcp 00:05:23.438 rmmod nvme_fabrics 00:05:23.438 rmmod nvme_keyring 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:23.438 13:47:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2187899 ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2187899 ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2187899' 00:05:23.438 killing process with pid 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2187899 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:23.438 13:47:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:25.984 00:05:25.984 real 0m13.412s 00:05:25.984 user 0m13.740s 00:05:25.984 sys 0m6.634s 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 ************************************ 00:05:25.984 END TEST nvmf_abort 00:05:25.984 ************************************ 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 ************************************ 00:05:25.984 START TEST nvmf_ns_hotplug_stress 00:05:25.984 ************************************ 00:05:25.984 13:47:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:25.984 * Looking for test storage... 00:05:25.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.984 13:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.984 
13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.984 13:47:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.984 --rc genhtml_branch_coverage=1 00:05:25.984 --rc genhtml_function_coverage=1 00:05:25.984 --rc genhtml_legend=1 00:05:25.984 --rc geninfo_all_blocks=1 00:05:25.984 --rc geninfo_unexecuted_blocks=1 00:05:25.984 00:05:25.984 ' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.984 --rc genhtml_branch_coverage=1 00:05:25.984 --rc genhtml_function_coverage=1 00:05:25.984 --rc genhtml_legend=1 00:05:25.984 --rc geninfo_all_blocks=1 00:05:25.984 --rc geninfo_unexecuted_blocks=1 00:05:25.984 00:05:25.984 ' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.984 --rc genhtml_branch_coverage=1 00:05:25.984 --rc genhtml_function_coverage=1 00:05:25.984 --rc genhtml_legend=1 00:05:25.984 --rc geninfo_all_blocks=1 00:05:25.984 --rc geninfo_unexecuted_blocks=1 00:05:25.984 00:05:25.984 ' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.984 --rc genhtml_branch_coverage=1 00:05:25.984 --rc genhtml_function_coverage=1 00:05:25.984 --rc genhtml_legend=1 00:05:25.984 --rc geninfo_all_blocks=1 00:05:25.984 --rc geninfo_unexecuted_blocks=1 00:05:25.984 
00:05:25.984 ' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.984 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:25.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:25.985 13:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.188 13:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:34.188 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:34.188 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.188 13:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:34.188 Found net devices under 0000:31:00.0: cvl_0_0 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.188 13:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:34.188 Found net devices under 0000:31:00.1: cvl_0_1 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.188 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.189 13:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:05:34.189 00:05:34.189 --- 10.0.0.2 ping statistics --- 00:05:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.189 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:05:34.189 00:05:34.189 --- 10.0.0.1 ping statistics --- 00:05:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.189 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2192892 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2192892 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2192892 ']' 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.189 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.189 [2024-11-06 13:47:19.783239] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:05:34.189 [2024-11-06 13:47:19.783307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.189 [2024-11-06 13:47:19.886268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.189 [2024-11-06 13:47:19.938525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.189 [2024-11-06 13:47:19.938579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.189 [2024-11-06 13:47:19.938588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.189 [2024-11-06 13:47:19.938595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.189 [2024-11-06 13:47:19.938601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:34.189 [2024-11-06 13:47:19.940512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.189 [2024-11-06 13:47:19.940673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.189 [2024-11-06 13:47:19.940673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:34.451 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:34.713 [2024-11-06 13:47:20.816152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.713 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:34.974 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:34.974 [2024-11-06 13:47:21.219231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.235 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:35.235 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:35.495 Malloc0 00:05:35.495 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:35.755 Delay0 00:05:35.756 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.016 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:36.016 NULL1 00:05:36.016 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:36.276 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:36.276 13:47:22 
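The setup phase logged above (transport creation through the `NULL1` namespace and the `spdk_nvme_perf` launch) condenses to a short RPC sequence. This is a hedged sketch, not the test script itself: the NQN, addresses, and bdev parameters are copied from the log, but `RPC` defaults to `echo` here (an assumption for illustration) so the sketch runs without a live `nvmf_tgt`; in a real run it would be SPDK's `scripts/rpc.py`.

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress setup seen in the log above.
# Assumption: RPC is stubbed to `echo`; point it at scripts/rpc.py for a real target.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport init
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # subsystem, max 10 ns
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                          # 32 MiB, 512 B blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # 1 s latency on all paths
$RPC nvmf_subsystem_add_ns "$NQN" Delay0                           # becomes nsid 1
$RPC bdev_null_create NULL1 1000 512                               # 1000 MiB null bdev
$RPC nvmf_subsystem_add_ns "$NQN" NULL1
```

After this point the log launches `spdk_nvme_perf` against `10.0.0.2:4420` to keep I/O in flight while namespaces are hot-plugged.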
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2193361 00:05:36.276 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:36.276 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.537 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.797 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:36.797 13:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:36.797 true 00:05:36.797 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:36.797 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.057 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.317 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:37.317 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:37.317 true 00:05:37.317 13:47:23 
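The remainder of the log is the same three-RPC cycle repeated while the perf process stays alive: hot-remove namespace 1, re-attach `Delay0`, then grow `NULL1` by one unit. A minimal sketch of that loop, with `RPC` again stubbed to `echo` (assumption) and the `kill -0 $PERF_PID` liveness check elided; the iteration count of 34 matches the `null_size` values 1001 through 1034 visible in this excerpt.

```shell
#!/usr/bin/env bash
# Sketch of the hotplug-stress loop driven by target/ns_hotplug_stress.sh.
# Assumption: RPC is stubbed to `echo`; the real test also aborts via
# `kill -0 $PERF_PID` if spdk_nvme_perf has exited.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

for _ in $(seq 1 34); do
    $RPC nvmf_subsystem_remove_ns "$NQN" 1      # hot-remove nsid 1 (Delay0) under I/O
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0    # hot-add it back
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"    # resize NULL1: 1001, 1002, ... 1034
done
```

Each `true` in the log is the successful return of `bdev_null_resize`; the delay bdev ensures outstanding I/O is in flight on nsid 1 when it is removed, which is the race the test exercises.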
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:37.317 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.576 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.836 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:37.836 13:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:38.096 true 00:05:38.096 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:38.096 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.096 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.356 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:38.356 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:38.616 true 00:05:38.616 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:38.616 13:47:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.616 13:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.877 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:38.877 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:39.137 true 00:05:39.137 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:39.137 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.396 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.396 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:39.396 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:39.656 true 00:05:39.656 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:39.656 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.917 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.917 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:39.917 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:40.177 true 00:05:40.177 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:40.177 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.437 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.697 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:40.697 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:40.697 true 00:05:40.697 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:40.697 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.956 
13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.217 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:41.217 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:41.217 true 00:05:41.217 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:41.217 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.476 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.736 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:41.736 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:41.736 true 00:05:41.997 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:41.997 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.997 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.257 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:42.257 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:42.517 true 00:05:42.517 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:42.517 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.517 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.776 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:42.776 13:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:43.037 true 00:05:43.037 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:43.037 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.297 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.297 
13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:43.297 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:43.557 true 00:05:43.557 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:43.557 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.816 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.075 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:44.075 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:44.075 true 00:05:44.075 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:44.075 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.335 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.595 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:44.595 13:47:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:44.595 true 00:05:44.595 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:44.595 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.855 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.114 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:45.114 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:45.114 true 00:05:45.374 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:45.374 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.374 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.633 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:45.633 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:45.893 true 00:05:45.893 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:45.893 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.893 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.153 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:46.153 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:46.413 true 00:05:46.413 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:46.413 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.673 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.673 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:46.673 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:46.934 true 00:05:46.934 13:47:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:46.934 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.194 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.453 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:47.454 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:47.454 true 00:05:47.454 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:47.454 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.714 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.973 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:47.974 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:47.974 true 00:05:47.974 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:47.974 13:47:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.238 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.498 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:48.498 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:48.498 true 00:05:48.758 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:48.758 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.758 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.019 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:49.019 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:49.279 true 00:05:49.279 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:49.279 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.279 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.539 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:49.539 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:49.798 true 00:05:49.798 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:49.798 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.058 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.058 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:50.058 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:50.318 true 00:05:50.318 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:50.318 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.578 
13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.578 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:50.578 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:50.838 true 00:05:50.838 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:50.838 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.098 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.358 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:51.358 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:51.358 true 00:05:51.358 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:51.358 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.628 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.888 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:51.888 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:51.888 true 00:05:51.888 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:51.888 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.148 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.408 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:52.409 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:52.409 true 00:05:52.669 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:52.669 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.669 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.929 
13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:52.929 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:53.189 true 00:05:53.189 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:53.189 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.189 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.448 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:53.448 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:53.708 true 00:05:53.708 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:53.708 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.968 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.968 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:53.968 13:47:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:54.228 true 00:05:54.228 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:54.228 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.490 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.490 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:54.490 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:54.750 true 00:05:54.750 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:54.750 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.011 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.271 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:55.271 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:55.271 true 00:05:55.271 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:55.271 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.532 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.792 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:55.792 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:55.792 true 00:05:55.792 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:55.792 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.052 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.312 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:56.312 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:56.572 true 00:05:56.572 13:47:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:56.572 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.572 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.832 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:56.832 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:57.093 true 00:05:57.093 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:57.093 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.354 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.354 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:57.354 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:57.614 true 00:05:57.615 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:57.615 13:47:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.874 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.874 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:57.874 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:58.134 true 00:05:58.134 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:58.134 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.394 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.394 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:58.394 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:58.653 true 00:05:58.653 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:58.653 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.912 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.173 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:59.173 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:59.173 true 00:05:59.173 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:59.173 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.433 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.693 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:59.693 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:59.693 true 00:05:59.953 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:05:59.953 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.953 
13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.213 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:00.213 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:00.473 true 00:06:00.473 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:00.473 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.473 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.733 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:00.733 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:00.994 true 00:06:00.994 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:00.994 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.994 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.255 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:01.255 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:01.516 true 00:06:01.516 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:01.516 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.777 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.777 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:01.777 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:02.037 true 00:06:02.038 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:02.038 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.297 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.558 
13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:02.558 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:02.558 true 00:06:02.558 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:02.558 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.822 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.082 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:03.082 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:03.082 true 00:06:03.082 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:03.082 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.341 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.601 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:03.601 13:47:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:03.601 true 00:06:03.859 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:03.859 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.860 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.120 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:04.120 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:04.380 true 00:06:04.380 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:04.380 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.380 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.640 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:04.640 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:04.900 true 00:06:04.900 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:04.900 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.160 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.160 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:05.160 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:05.420 true 00:06:05.420 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361 00:06:05.420 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.680 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.680 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:05.680 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:05.940 true 00:06:05.940 13:47:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361
00:06:05.940 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.200 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.459 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:06.459 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:06.459 true
00:06:06.459 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361
00:06:06.459 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.719 Initializing NVMe Controllers
00:06:06.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:06.719 Controller IO queue size 128, less than required.
00:06:06.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:06.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:06.719 Initialization complete. Launching workers.
00:06:06.719 ========================================================
00:06:06.719                                                           Latency(us)
00:06:06.719 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:06.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30615.07      14.95    4180.85    1183.27   11017.79
00:06:06.719 ========================================================
00:06:06.719 Total                                                                  :   30615.07      14.95    4180.85    1183.27   11017.79
00:06:06.719
00:06:06.719 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.978 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:06.978 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:06.978 true
00:06:06.978 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193361
00:06:06.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2193361) - No such process
00:06:06.978 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2193361
00:06:06.978 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.238 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:07.497 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:07.497 13:47:53
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:07.497 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:07.498 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.498 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:07.498 null0 00:06:07.498 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.498 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.498 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:07.757 null1 00:06:07.757 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.757 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.757 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:08.017 null2 00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:08.017 null3 
00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.017 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:08.315 null4 00:06:08.315 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.315 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.315 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:08.618 null5 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:08.618 null6 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.618 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:08.915 null7 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.915 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2199917 2199918 2199920 2199922 2199924 2199926 2199928 2199930 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.916 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.176 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.436 13:47:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.436 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.437 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.696 13:47:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.696 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.956 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.956 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.956 13:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:09.956 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.215 13:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.215 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.476 13:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.476 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 
13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.739 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.000 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.261 13:47:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.261 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.261 13:47:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.521 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.780 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.780 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.040 13:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.040 13:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.040 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.301 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.302 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.563 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:12.824 13:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:12.824 rmmod nvme_tcp 00:06:12.824 rmmod nvme_fabrics 00:06:12.824 rmmod nvme_keyring 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2192892 ']' 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2192892 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2192892 ']' 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2192892 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:12.824 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2192892 00:06:12.824 13:47:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:12.824 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:12.824 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2192892' 00:06:12.824 killing process with pid 2192892 00:06:12.824 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2192892 00:06:12.824 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2192892 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:13.084 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:14.997 00:06:14.997 real 0m49.347s 00:06:14.997 user 3m20.118s 00:06:14.997 sys 0m17.634s 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.997 ************************************ 00:06:14.997 END TEST nvmf_ns_hotplug_stress 00:06:14.997 ************************************ 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.997 13:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.258 ************************************ 00:06:15.258 START TEST nvmf_delete_subsystem 00:06:15.258 ************************************ 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.258 * Looking for test storage... 
00:06:15.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.258 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:15.259 13:48:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.259 --rc genhtml_branch_coverage=1 00:06:15.259 --rc genhtml_function_coverage=1 00:06:15.259 --rc genhtml_legend=1 00:06:15.259 --rc geninfo_all_blocks=1 00:06:15.259 --rc geninfo_unexecuted_blocks=1 00:06:15.259 00:06:15.259 ' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.259 --rc genhtml_branch_coverage=1 00:06:15.259 --rc genhtml_function_coverage=1 00:06:15.259 --rc genhtml_legend=1 00:06:15.259 --rc geninfo_all_blocks=1 00:06:15.259 --rc geninfo_unexecuted_blocks=1 00:06:15.259 00:06:15.259 ' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.259 --rc genhtml_branch_coverage=1 00:06:15.259 --rc genhtml_function_coverage=1 00:06:15.259 --rc genhtml_legend=1 00:06:15.259 --rc geninfo_all_blocks=1 00:06:15.259 --rc geninfo_unexecuted_blocks=1 00:06:15.259 00:06:15.259 ' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.259 --rc genhtml_branch_coverage=1 00:06:15.259 --rc genhtml_function_coverage=1 00:06:15.259 --rc genhtml_legend=1 00:06:15.259 --rc geninfo_all_blocks=1 00:06:15.259 --rc geninfo_unexecuted_blocks=1 00:06:15.259 00:06:15.259 ' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.259 13:48:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.259 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.260 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.520 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.520 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.520 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.520 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.663 13:48:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:23.663 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:23.663 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.663 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:23.664 Found net devices under 0000:31:00.0: cvl_0_0 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:06:23.664 Found net devices under 0000:31:00.1: cvl_0_1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.664 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.01 ms 00:06:23.664 00:06:23.664 --- 10.0.0.2 ping statistics --- 00:06:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.664 rtt min/avg/max/mdev = 1.010/1.010/1.010/0.000 ms 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:06:23.664 00:06:23.664 --- 10.0.0.1 ping statistics --- 00:06:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.664 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:23.664 13:48:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2205353 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2205353 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2205353 ']' 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.664 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.664 [2024-11-06 13:48:09.226972] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:06:23.664 [2024-11-06 13:48:09.227043] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:23.664 [2024-11-06 13:48:09.329232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.664 [2024-11-06 13:48:09.380915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:23.664 [2024-11-06 13:48:09.380970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:23.664 [2024-11-06 13:48:09.380979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:23.664 [2024-11-06 13:48:09.380986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:23.664 [2024-11-06 13:48:09.380992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:23.664 [2024-11-06 13:48:09.382794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.664 [2024-11-06 13:48:09.382807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.925 [2024-11-06 13:48:10.112654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:23.925 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.926 [2024-11-06 13:48:10.136988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.926 NULL1
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.926 Delay0
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2205824
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:06:23.926 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:24.187 [2024-11-06 13:48:10.264110] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:06:26.100 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:26.100 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.100 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:26.362 Write completed with error (sct=0, sc=8)
00:06:26.362 Read completed with error (sct=0, sc=8)
00:06:26.362 starting I/O failed: -6
00:06:26.362 [... many repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:06:26.362 [2024-11-06 13:48:12.388997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61f00 is same with the state(6) to be set
00:06:26.363 [... further repeated completion-error lines elided ...]
00:06:27.307 [2024-11-06 13:48:13.364481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf635e0 is same with the state(6) to be set
00:06:27.307 [... further repeated completion-error lines elided ...]
00:06:27.307 [2024-11-06 13:48:13.392292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf620e0 is same with the state(6) to be set
00:06:27.307 [... further repeated completion-error lines elided ...]
00:06:27.307 [2024-11-06 13:48:13.392688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf624a0 is same with the state(6) to be set
00:06:27.308 [... further repeated completion-error lines elided ...]
00:06:27.308 [2024-11-06 13:48:13.395506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faff000d020 is same with the state(6) to be set
00:06:27.308 [... further repeated completion-error lines elided ...]
00:06:27.308 [2024-11-06 13:48:13.395726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faff000d7e0 is same with the state(6) to be set
00:06:27.308 Initializing NVMe Controllers
00:06:27.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:27.308 Controller IO queue size 128, less than required.
00:06:27.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:27.308 Initialization complete. Launching workers.
00:06:27.308 ========================================================
00:06:27.308 Latency(us)
00:06:27.308 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     164.82       0.08  906576.27     347.66 1006779.37
00:06:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     176.78       0.09  973154.18     405.53 2001838.62
00:06:27.308 ========================================================
00:06:27.308 Total                                                                    :     341.60       0.17  941029.85     347.66 2001838.62
00:06:27.308
00:06:27.308 [2024-11-06 13:48:13.396173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf635e0 (9): Bad file descriptor
00:06:27.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:27.308 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.308 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:27.308 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2205824
00:06:27.308 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2205824
00:06:27.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2205824) - No such process
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2205824
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2205824
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2205824
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:27.879 [2024-11-06 13:48:13.927013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2206733
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:27.879 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.879 [2024-11-06 13:48:14.025337] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:06:28.451 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.451 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:28.451 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.711 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.711 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:28.711 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.281 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.281 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:29.281 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.852 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.852 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:29.852 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:30.422 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:30.422 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:30.422 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:30.991 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:30.991 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:30.991 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:31.250 Initializing NVMe Controllers
00:06:31.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:31.250 Controller IO queue size 128, less than required.
00:06:31.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:31.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:31.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:31.250 Initialization complete. Launching workers.
00:06:31.251 ========================================================
00:06:31.251 Latency(us)
00:06:31.251 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:31.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002429.44 1000134.08 1041919.94
00:06:31.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003125.14 1000242.61 1007824.99
00:06:31.251 ========================================================
00:06:31.251 Total                                                                    :     256.00       0.12 1002777.29 1000134.08 1041919.94
00:06:31.251
00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2206733
00:06:31.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2206733) - No such process
00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 2206733 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.251 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.251 rmmod nvme_tcp 00:06:31.251 rmmod nvme_fabrics 00:06:31.251 rmmod nvme_keyring 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2205353 ']' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2205353 ']' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:31.511 13:48:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2205353' 00:06:31.511 killing process with pid 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2205353 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.511 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.054 00:06:34.054 real 0m18.507s 00:06:34.054 user 0m31.017s 00:06:34.054 sys 0m6.877s 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.054 ************************************ 00:06:34.054 END TEST nvmf_delete_subsystem 00:06:34.054 ************************************ 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.054 ************************************ 00:06:34.054 START TEST nvmf_host_management 00:06:34.054 ************************************ 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:34.054 * Looking for test storage... 
00:06:34.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.054 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:34.054 13:48:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.054 13:48:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.054 --rc genhtml_branch_coverage=1 00:06:34.054 --rc genhtml_function_coverage=1 00:06:34.054 --rc genhtml_legend=1 00:06:34.054 --rc geninfo_all_blocks=1 00:06:34.054 --rc geninfo_unexecuted_blocks=1 00:06:34.054 00:06:34.054 ' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.054 --rc genhtml_branch_coverage=1 00:06:34.054 --rc genhtml_function_coverage=1 00:06:34.054 --rc genhtml_legend=1 00:06:34.054 --rc geninfo_all_blocks=1 00:06:34.054 --rc geninfo_unexecuted_blocks=1 00:06:34.054 00:06:34.054 ' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.054 --rc genhtml_branch_coverage=1 00:06:34.054 --rc genhtml_function_coverage=1 00:06:34.054 --rc genhtml_legend=1 00:06:34.054 --rc geninfo_all_blocks=1 00:06:34.054 --rc geninfo_unexecuted_blocks=1 00:06:34.054 00:06:34.054 ' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.054 --rc genhtml_branch_coverage=1 00:06:34.054 --rc genhtml_function_coverage=1 00:06:34.054 --rc genhtml_legend=1 00:06:34.054 --rc geninfo_all_blocks=1 00:06:34.054 --rc geninfo_unexecuted_blocks=1 00:06:34.054 00:06:34.054 ' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.054 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.055 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.191 13:48:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.191 13:48:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.191 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:42.192 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:42.192 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.192 13:48:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:42.192 Found net devices under 0000:31:00.0: cvl_0_0 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:42.192 Found net devices under 0000:31:00.1: cvl_0_1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.192 13:48:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
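The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) can be replayed as a short dry-run script. The `run()` wrapper below only echoes each command rather than executing it, since the real commands need root plus the cvl_0_0/cvl_0_1 interfaces present on this CI node:

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced above. run() echoes
# instead of executing, because the real commands require root and the
# cvl_0_0/cvl_0_1 interfaces from this test bed.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                       # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open NVMe/TCP port
```

With the target NIC inside the namespace and the initiator NIC outside it, the two ends of the TCP transport live in separate network stacks on one host, which is what the ping checks that follow verify.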
00:06:42.192 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:06:42.193 00:06:42.193 --- 10.0.0.2 ping statistics --- 00:06:42.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.193 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:06:42.193 00:06:42.193 --- 10.0.0.1 ping statistics --- 00:06:42.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.193 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2211784 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2211784 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2211784 ']' 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
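The waitforlisten call above blocks until the target's RPC socket appears. A minimal, unprivileged sketch of that polling pattern, with a temp file standing in for /var/tmp/spdk.sock and a background `touch` standing in for nvmf_tgt coming up (the real helper also verifies the pid stays alive between retries):

```shell
# Sketch of the waitforlisten polling pattern traced above. A temp file
# stands in for /var/tmp/spdk.sock, and a background `touch` stands in
# for nvmf_tgt starting to listen.
rpc_addr=$(mktemp -u)                # stand-in for /var/tmp/spdk.sock
max_retries=100                      # same retry budget as autotest_common.sh
( sleep 0.2; touch "$rpc_addr" ) &   # fake "target is now listening"

i=0
while [ ! -e "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
  sleep 0.1
  i=$((i + 1))
done
if [ -e "$rpc_addr" ]; then listening=yes; else listening=no; fi
echo "listening=$listening after $i retries"
rm -f "$rpc_addr"
```

Once the socket exists, rpc_cmd calls such as the nvmf_create_transport seen later in this run can be issued against it.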
00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.193 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.193 [2024-11-06 13:48:27.827416] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:06:42.193 [2024-11-06 13:48:27.827478] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.193 [2024-11-06 13:48:27.926643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.193 [2024-11-06 13:48:27.979625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.193 [2024-11-06 13:48:27.979675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.193 [2024-11-06 13:48:27.979683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.193 [2024-11-06 13:48:27.979690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.193 [2024-11-06 13:48:27.979697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:42.193 [2024-11-06 13:48:27.981789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.193 [2024-11-06 13:48:27.982007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.193 [2024-11-06 13:48:27.982166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.193 [2024-11-06 13:48:27.982167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 [2024-11-06 13:48:28.709458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:42.454 13:48:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:42.454 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:42.716 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:42.716 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.716 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.716 Malloc0 00:06:42.716 [2024-11-06 13:48:28.789241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2212159 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2212159 /var/tmp/bdevperf.sock 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2212159 ']' 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:42.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:42.717 { 00:06:42.717 "params": { 00:06:42.717 "name": "Nvme$subsystem", 00:06:42.717 "trtype": "$TEST_TRANSPORT", 00:06:42.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:42.717 "adrfam": "ipv4", 00:06:42.717 "trsvcid": "$NVMF_PORT", 00:06:42.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:42.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:42.717 "hdgst": ${hdgst:-false}, 
00:06:42.717 "ddgst": ${ddgst:-false} 00:06:42.717 }, 00:06:42.717 "method": "bdev_nvme_attach_controller" 00:06:42.717 } 00:06:42.717 EOF 00:06:42.717 )") 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:42.717 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:42.717 "params": { 00:06:42.717 "name": "Nvme0", 00:06:42.717 "trtype": "tcp", 00:06:42.717 "traddr": "10.0.0.2", 00:06:42.717 "adrfam": "ipv4", 00:06:42.717 "trsvcid": "4420", 00:06:42.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:42.717 "hdgst": false, 00:06:42.717 "ddgst": false 00:06:42.717 }, 00:06:42.717 "method": "bdev_nvme_attach_controller" 00:06:42.717 }' 00:06:42.717 [2024-11-06 13:48:28.900310] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:06:42.717 [2024-11-06 13:48:28.900383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212159 ] 00:06:42.977 [2024-11-06 13:48:28.995674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.977 [2024-11-06 13:48:29.049576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.237 Running I/O for 10 seconds... 
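The waitforio loop traced next (host_management.sh@52-64) polls bdev_get_iostat over the bdevperf RPC socket until at least 100 reads complete. A self-contained sketch with the RPC stubbed out: the real call is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1` piped through `jq -r '.bdevs[0].num_read_ops'`; here a stub returns the 713-op payload seen in this run, and sed replaces jq so the sketch has no external dependencies:

```shell
# Sketch of the waitforio helper traced below: poll the read-op count
# until it crosses the 100-op threshold or 10 attempts run out.
# get_iostat stubs the bdevperf RPC with the value observed in this run.
get_iostat() {
  echo '{"bdevs":[{"name":"Nvme0n1","num_read_ops":713}]}'
}

ret=1
i=10
while [ "$i" != 0 ]; do
  read_io_count=$(get_iostat | sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0      # enough I/O observed; the host-management checks can proceed
    break
  fi
  sleep 1
  i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"
```

In the trace that follows, the first iteration already reads 713 ops, so the loop breaks immediately with ret=0 and the test moves on to nvmf_subsystem_remove_host.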
00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.497 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=713 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 713 -ge 100 ']' 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.759 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.759 [2024-11-06 13:48:29.809095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same 
with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 
[2024-11-06 13:48:29.809279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809361] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 
is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bf910 is same with the state(6) to be set 00:06:43.760 [2024-11-06 13:48:29.809694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.760 [2024-11-06 13:48:29.809957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.760 [2024-11-06 13:48:29.809968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.809975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.809985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.809992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 
13:48:29.810191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.761 [2024-11-06 13:48:29.810562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.761 [2024-11-06 13:48:29.810573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 
13:48:29.810580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.762 [2024-11-06 13:48:29.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.762 [2024-11-06 13:48:29.810901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ac60 is same with the state(6) to be set 00:06:43.762 [2024-11-06 13:48:29.812223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:43.762 task offset: 106496 on job bdev=Nvme0n1 fails 00:06:43.762 00:06:43.762 Latency(us) 00:06:43.762 [2024-11-06T12:48:30.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.762 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:43.762 Job: Nvme0n1 ended in about 0.55 seconds with error 00:06:43.762 Verification LBA range: start 0x0 length 0x400 00:06:43.762 Nvme0n1 : 0.55 1419.79 88.74 117.40 0.00 40581.68 7045.12 37137.07 00:06:43.762 [2024-11-06T12:48:30.042Z] =================================================================================================================== 00:06:43.762 [2024-11-06T12:48:30.042Z] Total : 1419.79 88.74 117.40 0.00 40581.68 7045.12 37137.07 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.762 [2024-11-06 13:48:29.814481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.762 [2024-11-06 13:48:29.814522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fa280 (9): 
Bad file descriptor 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.762 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:43.762 [2024-11-06 13:48:29.918978] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2212159 00:06:44.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2212159) - No such process 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 
-- # local subsystem config 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:44.705 { 00:06:44.705 "params": { 00:06:44.705 "name": "Nvme$subsystem", 00:06:44.705 "trtype": "$TEST_TRANSPORT", 00:06:44.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:44.705 "adrfam": "ipv4", 00:06:44.705 "trsvcid": "$NVMF_PORT", 00:06:44.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:44.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:44.705 "hdgst": ${hdgst:-false}, 00:06:44.705 "ddgst": ${ddgst:-false} 00:06:44.705 }, 00:06:44.705 "method": "bdev_nvme_attach_controller" 00:06:44.705 } 00:06:44.705 EOF 00:06:44.705 )") 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:44.705 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:44.705 "params": { 00:06:44.705 "name": "Nvme0", 00:06:44.705 "trtype": "tcp", 00:06:44.705 "traddr": "10.0.0.2", 00:06:44.705 "adrfam": "ipv4", 00:06:44.705 "trsvcid": "4420", 00:06:44.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:44.706 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:44.706 "hdgst": false, 00:06:44.706 "ddgst": false 00:06:44.706 }, 00:06:44.706 "method": "bdev_nvme_attach_controller" 00:06:44.706 }' 00:06:44.706 [2024-11-06 13:48:30.890370] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:06:44.706 [2024-11-06 13:48:30.890445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212510 ] 00:06:44.706 [2024-11-06 13:48:30.983133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.966 [2024-11-06 13:48:31.018915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.226 Running I/O for 1 seconds... 00:06:46.166 1799.00 IOPS, 112.44 MiB/s 00:06:46.166 Latency(us) 00:06:46.166 [2024-11-06T12:48:32.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.166 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:46.166 Verification LBA range: start 0x0 length 0x400 00:06:46.166 Nvme0n1 : 1.01 1850.75 115.67 0.00 0.00 33864.20 1181.01 30146.56 00:06:46.166 [2024-11-06T12:48:32.446Z] =================================================================================================================== 00:06:46.166 [2024-11-06T12:48:32.446Z] Total : 1850.75 115.67 0.00 0.00 33864.20 1181.01 30146.56 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:46.426 13:48:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.426 rmmod nvme_tcp 00:06:46.426 rmmod nvme_fabrics 00:06:46.426 rmmod nvme_keyring 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2211784 ']' 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2211784 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2211784 ']' 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2211784 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2211784 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2211784' 00:06:46.426 killing process with pid 2211784 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2211784 00:06:46.426 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2211784 00:06:46.686 [2024-11-06 13:48:32.743447] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.686 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.597 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.597 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:48.597 00:06:48.597 real 0m14.968s 00:06:48.597 user 0m24.089s 00:06:48.597 sys 0m6.910s 00:06:48.597 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.597 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.597 ************************************ 00:06:48.597 END TEST nvmf_host_management 00:06:48.597 ************************************ 00:06:48.857 13:48:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.857 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:48.857 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.857 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.857 ************************************ 00:06:48.857 START TEST nvmf_lvol 00:06:48.857 ************************************ 00:06:48.857 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.857 * Looking for test storage... 
00:06:48.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.857 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.858 13:48:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.858 --rc genhtml_branch_coverage=1 00:06:48.858 --rc genhtml_function_coverage=1 00:06:48.858 --rc genhtml_legend=1 00:06:48.858 --rc geninfo_all_blocks=1 00:06:48.858 --rc geninfo_unexecuted_blocks=1 
00:06:48.858 00:06:48.858 ' 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.858 --rc genhtml_branch_coverage=1 00:06:48.858 --rc genhtml_function_coverage=1 00:06:48.858 --rc genhtml_legend=1 00:06:48.858 --rc geninfo_all_blocks=1 00:06:48.858 --rc geninfo_unexecuted_blocks=1 00:06:48.858 00:06:48.858 ' 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.858 --rc genhtml_branch_coverage=1 00:06:48.858 --rc genhtml_function_coverage=1 00:06:48.858 --rc genhtml_legend=1 00:06:48.858 --rc geninfo_all_blocks=1 00:06:48.858 --rc geninfo_unexecuted_blocks=1 00:06:48.858 00:06:48.858 ' 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.858 --rc genhtml_branch_coverage=1 00:06:48.858 --rc genhtml_function_coverage=1 00:06:48.858 --rc genhtml_legend=1 00:06:48.858 --rc geninfo_all_blocks=1 00:06:48.858 --rc geninfo_unexecuted_blocks=1 00:06:48.858 00:06:48.858 ' 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.858 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.119 13:48:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.119 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:57.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:57.253 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:57.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:57.254 
13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:57.254 Found net devices under 0000:31:00.0: cvl_0_0 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:57.254 13:48:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:57.254 Found net devices under 0000:31:00.1: cvl_0_1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:57.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:57.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:06:57.254 00:06:57.254 --- 10.0.0.2 ping statistics --- 00:06:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.254 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:06:57.254 00:06:57.254 --- 10.0.0.1 ping statistics --- 00:06:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.254 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2217232 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2217232 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2217232 ']' 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.254 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 [2024-11-06 13:48:42.879327] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:06:57.254 [2024-11-06 13:48:42.879392] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.254 [2024-11-06 13:48:42.983541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.254 [2024-11-06 13:48:43.035321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.254 [2024-11-06 13:48:43.035373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.254 [2024-11-06 13:48:43.035382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.254 [2024-11-06 13:48:43.035389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.254 [2024-11-06 13:48:43.035395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:57.254 [2024-11-06 13:48:43.037318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.254 [2024-11-06 13:48:43.037480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.254 [2024-11-06 13:48:43.037480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.515 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:57.775 [2024-11-06 13:48:43.918415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.775 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:58.036 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:58.036 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:58.296 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:58.296 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:58.557 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:58.557 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c39e255a-d923-4c07-b0c1-553c0954d93e 00:06:58.557 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c39e255a-d923-4c07-b0c1-553c0954d93e lvol 20 00:06:58.818 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b4d7c60b-ecae-417b-a4e1-cc6aff21f5fc 00:06:58.818 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:59.078 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4d7c60b-ecae-417b-a4e1-cc6aff21f5fc 00:06:59.339 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:59.339 [2024-11-06 13:48:45.571027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.339 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.600 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2217725 00:06:59.600 13:48:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:59.600 13:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:00.540 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b4d7c60b-ecae-417b-a4e1-cc6aff21f5fc MY_SNAPSHOT 00:07:00.801 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f57e3c63-f2d3-4198-80a0-baf56905515f 00:07:00.801 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b4d7c60b-ecae-417b-a4e1-cc6aff21f5fc 30 00:07:01.062 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f57e3c63-f2d3-4198-80a0-baf56905515f MY_CLONE 00:07:01.323 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=22898a65-5177-4dea-aefb-b72d424ce7e8 00:07:01.323 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 22898a65-5177-4dea-aefb-b72d424ce7e8 00:07:01.583 13:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2217725 00:07:11.646 Initializing NVMe Controllers 00:07:11.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:11.646 Controller IO queue size 128, less than required. 00:07:11.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:11.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:11.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:11.646 Initialization complete. Launching workers.
00:07:11.646 ========================================================
00:07:11.646 Latency(us)
00:07:11.646 Device Information : IOPS MiB/s Average min max
00:07:11.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16190.50 63.24 7908.55 1470.76 38143.68
00:07:11.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17231.10 67.31 7430.29 1767.51 66160.04
00:07:11.646 ========================================================
00:07:11.646 Total : 33421.60 130.55 7661.97 1470.76 66160.04
00:07:11.646
00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4d7c60b-ecae-417b-a4e1-cc6aff21f5fc 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c39e255a-d923-4c07-b0c1-553c0954d93e 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:11.646 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:11.647 rmmod nvme_tcp 00:07:11.647 rmmod nvme_fabrics 00:07:11.647 rmmod nvme_keyring 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2217232 ']' 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2217232 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2217232 ']' 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2217232 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2217232 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2217232' 00:07:11.647 killing process with pid 2217232 00:07:11.647 13:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2217232 00:07:11.647 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2217232 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.647 13:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.099 00:07:13.099 real 0m24.249s 00:07:13.099 user 1m5.467s 00:07:13.099 sys 0m8.740s 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.099 ************************************ 00:07:13.099 END TEST 
nvmf_lvol 00:07:13.099 ************************************ 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.099 ************************************ 00:07:13.099 START TEST nvmf_lvs_grow 00:07:13.099 ************************************ 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:13.099 * Looking for test storage... 00:07:13.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.099 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.361 13:48:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.361 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.361 --rc genhtml_branch_coverage=1 00:07:13.361 --rc genhtml_function_coverage=1 00:07:13.361 --rc genhtml_legend=1 00:07:13.361 --rc geninfo_all_blocks=1 00:07:13.361 --rc geninfo_unexecuted_blocks=1 00:07:13.361 00:07:13.361 ' 
00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.362 --rc genhtml_branch_coverage=1 00:07:13.362 --rc genhtml_function_coverage=1 00:07:13.362 --rc genhtml_legend=1 00:07:13.362 --rc geninfo_all_blocks=1 00:07:13.362 --rc geninfo_unexecuted_blocks=1 00:07:13.362 00:07:13.362 ' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.362 --rc genhtml_branch_coverage=1 00:07:13.362 --rc genhtml_function_coverage=1 00:07:13.362 --rc genhtml_legend=1 00:07:13.362 --rc geninfo_all_blocks=1 00:07:13.362 --rc geninfo_unexecuted_blocks=1 00:07:13.362 00:07:13.362 ' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.362 --rc genhtml_branch_coverage=1 00:07:13.362 --rc genhtml_function_coverage=1 00:07:13.362 --rc genhtml_legend=1 00:07:13.362 --rc geninfo_all_blocks=1 00:07:13.362 --rc geninfo_unexecuted_blocks=1 00:07:13.362 00:07:13.362 ' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.362 13:48:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.362 
13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.362 13:48:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.362 
13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.362 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:21.499 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:21.499 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.499 
13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:21.499 Found net devices under 0000:31:00.0: cvl_0_0 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:21.499 Found net devices under 0000:31:00.1: cvl_0_1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.499 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.500 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.500 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.500 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.500 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.500 13:49:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:07:21.500 00:07:21.500 --- 10.0.0.2 ping statistics --- 00:07:21.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.500 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:21.500 00:07:21.500 --- 10.0.0.1 ping statistics --- 00:07:21.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.500 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2224324 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2224324 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2224324 ']' 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.500 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.500 [2024-11-06 13:49:07.140648] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:07:21.500 [2024-11-06 13:49:07.140718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.500 [2024-11-06 13:49:07.224102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.500 [2024-11-06 13:49:07.276295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.500 [2024-11-06 13:49:07.276346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.500 [2024-11-06 13:49:07.276355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.500 [2024-11-06 13:49:07.276361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.500 [2024-11-06 13:49:07.276367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.500 [2024-11-06 13:49:07.277200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.761 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:22.021 [2024-11-06 13:49:08.164563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.022 ************************************ 00:07:22.022 START TEST lvs_grow_clean 00:07:22.022 ************************************ 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.022 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.282 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:22.282 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.543 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1391086c-46cd-49bd-a320-5dd6e946462c 00:07:22.543 13:49:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:22.543 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.802 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.803 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.803 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1391086c-46cd-49bd-a320-5dd6e946462c lvol 150 00:07:22.803 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=22638d5e-5e20-4190-8af5-ade5e0a21d3e 00:07:22.803 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.803 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:23.063 [2024-11-06 13:49:09.201336] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:23.063 [2024-11-06 13:49:09.201405] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:23.063 true 00:07:23.063 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:23.063 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:23.325 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.325 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.325 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 22638d5e-5e20-4190-8af5-ade5e0a21d3e 00:07:23.585 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.845 [2024-11-06 13:49:09.923628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.845 13:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2224840 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
2224840 /var/tmp/bdevperf.sock 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2224840 ']' 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:24.106 13:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.106 [2024-11-06 13:49:10.180533] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:07:24.106 [2024-11-06 13:49:10.180608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224840 ] 00:07:24.106 [2024-11-06 13:49:10.271589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.106 [2024-11-06 13:49:10.325051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.048 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.048 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:25.048 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.308 Nvme0n1 00:07:25.308 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.569 [ 00:07:25.569 { 00:07:25.569 "name": "Nvme0n1", 00:07:25.569 "aliases": [ 00:07:25.569 "22638d5e-5e20-4190-8af5-ade5e0a21d3e" 00:07:25.569 ], 00:07:25.569 "product_name": "NVMe disk", 00:07:25.569 "block_size": 4096, 00:07:25.569 "num_blocks": 38912, 00:07:25.569 "uuid": "22638d5e-5e20-4190-8af5-ade5e0a21d3e", 00:07:25.569 "numa_id": 0, 00:07:25.569 "assigned_rate_limits": { 00:07:25.569 "rw_ios_per_sec": 0, 00:07:25.569 "rw_mbytes_per_sec": 0, 00:07:25.569 "r_mbytes_per_sec": 0, 00:07:25.569 "w_mbytes_per_sec": 0 00:07:25.569 }, 00:07:25.569 "claimed": false, 00:07:25.569 "zoned": false, 00:07:25.569 "supported_io_types": { 00:07:25.569 "read": true, 
00:07:25.569 "write": true, 00:07:25.569 "unmap": true, 00:07:25.569 "flush": true, 00:07:25.569 "reset": true, 00:07:25.569 "nvme_admin": true, 00:07:25.569 "nvme_io": true, 00:07:25.569 "nvme_io_md": false, 00:07:25.569 "write_zeroes": true, 00:07:25.569 "zcopy": false, 00:07:25.569 "get_zone_info": false, 00:07:25.569 "zone_management": false, 00:07:25.569 "zone_append": false, 00:07:25.569 "compare": true, 00:07:25.569 "compare_and_write": true, 00:07:25.569 "abort": true, 00:07:25.569 "seek_hole": false, 00:07:25.569 "seek_data": false, 00:07:25.569 "copy": true, 00:07:25.569 "nvme_iov_md": false 00:07:25.569 }, 00:07:25.569 "memory_domains": [ 00:07:25.569 { 00:07:25.569 "dma_device_id": "system", 00:07:25.569 "dma_device_type": 1 00:07:25.569 } 00:07:25.569 ], 00:07:25.569 "driver_specific": { 00:07:25.569 "nvme": [ 00:07:25.569 { 00:07:25.569 "trid": { 00:07:25.569 "trtype": "TCP", 00:07:25.569 "adrfam": "IPv4", 00:07:25.569 "traddr": "10.0.0.2", 00:07:25.569 "trsvcid": "4420", 00:07:25.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.569 }, 00:07:25.569 "ctrlr_data": { 00:07:25.569 "cntlid": 1, 00:07:25.569 "vendor_id": "0x8086", 00:07:25.569 "model_number": "SPDK bdev Controller", 00:07:25.569 "serial_number": "SPDK0", 00:07:25.569 "firmware_revision": "25.01", 00:07:25.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.569 "oacs": { 00:07:25.569 "security": 0, 00:07:25.569 "format": 0, 00:07:25.569 "firmware": 0, 00:07:25.569 "ns_manage": 0 00:07:25.569 }, 00:07:25.569 "multi_ctrlr": true, 00:07:25.569 "ana_reporting": false 00:07:25.569 }, 00:07:25.569 "vs": { 00:07:25.569 "nvme_version": "1.3" 00:07:25.569 }, 00:07:25.569 "ns_data": { 00:07:25.569 "id": 1, 00:07:25.569 "can_share": true 00:07:25.569 } 00:07:25.569 } 00:07:25.569 ], 00:07:25.569 "mp_policy": "active_passive" 00:07:25.569 } 00:07:25.569 } 00:07:25.569 ] 00:07:25.569 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2225085 00:07:25.569 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.569 13:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.569 Running I/O for 10 seconds... 00:07:26.512 Latency(us) 00:07:26.512 [2024-11-06T12:49:12.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.512 Nvme0n1 : 1.00 25026.00 97.76 0.00 0.00 0.00 0.00 0.00 00:07:26.512 [2024-11-06T12:49:12.792Z] =================================================================================================================== 00:07:26.512 [2024-11-06T12:49:12.792Z] Total : 25026.00 97.76 0.00 0.00 0.00 0.00 0.00 00:07:26.512 00:07:27.452 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:27.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.452 Nvme0n1 : 2.00 25230.00 98.55 0.00 0.00 0.00 0.00 0.00 00:07:27.452 [2024-11-06T12:49:13.732Z] =================================================================================================================== 00:07:27.452 [2024-11-06T12:49:13.732Z] Total : 25230.00 98.55 0.00 0.00 0.00 0.00 0.00 00:07:27.452 00:07:27.712 true 00:07:27.712 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:27.712 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:27.713 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.713 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.713 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2225085 00:07:28.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.652 Nvme0n1 : 3.00 25320.33 98.91 0.00 0.00 0.00 0.00 0.00 00:07:28.652 [2024-11-06T12:49:14.932Z] =================================================================================================================== 00:07:28.652 [2024-11-06T12:49:14.932Z] Total : 25320.33 98.91 0.00 0.00 0.00 0.00 0.00 00:07:28.652 00:07:29.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.592 Nvme0n1 : 4.00 25377.75 99.13 0.00 0.00 0.00 0.00 0.00 00:07:29.592 [2024-11-06T12:49:15.872Z] =================================================================================================================== 00:07:29.592 [2024-11-06T12:49:15.872Z] Total : 25377.75 99.13 0.00 0.00 0.00 0.00 0.00 00:07:29.592 00:07:30.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.532 Nvme0n1 : 5.00 25421.80 99.30 0.00 0.00 0.00 0.00 0.00 00:07:30.532 [2024-11-06T12:49:16.812Z] =================================================================================================================== 00:07:30.532 [2024-11-06T12:49:16.812Z] Total : 25421.80 99.30 0.00 0.00 0.00 0.00 0.00 00:07:30.532 00:07:31.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.470 Nvme0n1 : 6.00 25451.17 99.42 0.00 0.00 0.00 0.00 0.00 00:07:31.470 [2024-11-06T12:49:17.750Z] =================================================================================================================== 00:07:31.470 
[2024-11-06T12:49:17.750Z] Total : 25451.17 99.42 0.00 0.00 0.00 0.00 0.00 00:07:31.470 00:07:32.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.851 Nvme0n1 : 7.00 25471.43 99.50 0.00 0.00 0.00 0.00 0.00 00:07:32.851 [2024-11-06T12:49:19.131Z] =================================================================================================================== 00:07:32.851 [2024-11-06T12:49:19.131Z] Total : 25471.43 99.50 0.00 0.00 0.00 0.00 0.00 00:07:32.851 00:07:33.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.791 Nvme0n1 : 8.00 25494.75 99.59 0.00 0.00 0.00 0.00 0.00 00:07:33.791 [2024-11-06T12:49:20.071Z] =================================================================================================================== 00:07:33.791 [2024-11-06T12:49:20.071Z] Total : 25494.75 99.59 0.00 0.00 0.00 0.00 0.00 00:07:33.791 00:07:34.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.732 Nvme0n1 : 9.00 25505.89 99.63 0.00 0.00 0.00 0.00 0.00 00:07:34.732 [2024-11-06T12:49:21.012Z] =================================================================================================================== 00:07:34.732 [2024-11-06T12:49:21.012Z] Total : 25505.89 99.63 0.00 0.00 0.00 0.00 0.00 00:07:34.732 00:07:35.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.671 Nvme0n1 : 10.00 25514.90 99.67 0.00 0.00 0.00 0.00 0.00 00:07:35.671 [2024-11-06T12:49:21.951Z] =================================================================================================================== 00:07:35.671 [2024-11-06T12:49:21.951Z] Total : 25514.90 99.67 0.00 0.00 0.00 0.00 0.00 00:07:35.671 00:07:35.671 00:07:35.671 Latency(us) 00:07:35.671 [2024-11-06T12:49:21.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:35.671 Nvme0n1 : 10.00 25517.98 99.68 0.00 0.00 5012.90 2498.56 13107.20 00:07:35.671 [2024-11-06T12:49:21.951Z] =================================================================================================================== 00:07:35.671 [2024-11-06T12:49:21.951Z] Total : 25517.98 99.68 0.00 0.00 5012.90 2498.56 13107.20 00:07:35.671 { 00:07:35.671 "results": [ 00:07:35.671 { 00:07:35.671 "job": "Nvme0n1", 00:07:35.671 "core_mask": "0x2", 00:07:35.671 "workload": "randwrite", 00:07:35.671 "status": "finished", 00:07:35.671 "queue_depth": 128, 00:07:35.671 "io_size": 4096, 00:07:35.671 "runtime": 10.003811, 00:07:35.671 "iops": 25517.97509968951, 00:07:35.671 "mibps": 99.67959023316214, 00:07:35.671 "io_failed": 0, 00:07:35.671 "io_timeout": 0, 00:07:35.671 "avg_latency_us": 5012.896770384066, 00:07:35.671 "min_latency_us": 2498.56, 00:07:35.671 "max_latency_us": 13107.2 00:07:35.671 } 00:07:35.671 ], 00:07:35.671 "core_count": 1 00:07:35.671 } 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2224840 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2224840 ']' 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2224840 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2224840 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2224840' 00:07:35.671 killing process with pid 2224840 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2224840 00:07:35.671 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.671 00:07:35.671 Latency(us) 00:07:35.671 [2024-11-06T12:49:21.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.671 [2024-11-06T12:49:21.951Z] =================================================================================================================== 00:07:35.671 [2024-11-06T12:49:21.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2224840 00:07:35.671 13:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.931 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.191 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:36.191 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.452 13:49:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.452 [2024-11-06 13:49:22.664068] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.452 13:49:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:36.452 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:36.712 request: 00:07:36.712 { 00:07:36.712 "uuid": "1391086c-46cd-49bd-a320-5dd6e946462c", 00:07:36.712 "method": "bdev_lvol_get_lvstores", 00:07:36.712 "req_id": 1 00:07:36.712 } 00:07:36.712 Got JSON-RPC error response 00:07:36.712 response: 00:07:36.712 { 00:07:36.712 "code": -19, 00:07:36.713 "message": "No such device" 00:07:36.713 } 00:07:36.713 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:36.713 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.713 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.713 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.713 13:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.973 aio_bdev 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 22638d5e-5e20-4190-8af5-ade5e0a21d3e 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=22638d5e-5e20-4190-8af5-ade5e0a21d3e 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.973 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22638d5e-5e20-4190-8af5-ade5e0a21d3e -t 2000 00:07:37.233 [ 00:07:37.233 { 00:07:37.233 "name": "22638d5e-5e20-4190-8af5-ade5e0a21d3e", 00:07:37.233 "aliases": [ 00:07:37.233 "lvs/lvol" 00:07:37.233 ], 00:07:37.233 "product_name": "Logical Volume", 00:07:37.233 "block_size": 4096, 00:07:37.233 "num_blocks": 38912, 00:07:37.233 "uuid": "22638d5e-5e20-4190-8af5-ade5e0a21d3e", 00:07:37.233 "assigned_rate_limits": { 00:07:37.233 "rw_ios_per_sec": 0, 00:07:37.233 "rw_mbytes_per_sec": 0, 00:07:37.233 "r_mbytes_per_sec": 0, 00:07:37.233 "w_mbytes_per_sec": 0 00:07:37.233 }, 00:07:37.233 "claimed": false, 00:07:37.233 "zoned": false, 00:07:37.233 "supported_io_types": { 00:07:37.233 "read": true, 00:07:37.233 "write": true, 00:07:37.233 "unmap": true, 00:07:37.233 "flush": false, 00:07:37.233 "reset": true, 00:07:37.233 
"nvme_admin": false, 00:07:37.233 "nvme_io": false, 00:07:37.233 "nvme_io_md": false, 00:07:37.233 "write_zeroes": true, 00:07:37.233 "zcopy": false, 00:07:37.233 "get_zone_info": false, 00:07:37.233 "zone_management": false, 00:07:37.233 "zone_append": false, 00:07:37.233 "compare": false, 00:07:37.233 "compare_and_write": false, 00:07:37.233 "abort": false, 00:07:37.233 "seek_hole": true, 00:07:37.233 "seek_data": true, 00:07:37.233 "copy": false, 00:07:37.233 "nvme_iov_md": false 00:07:37.233 }, 00:07:37.233 "driver_specific": { 00:07:37.233 "lvol": { 00:07:37.233 "lvol_store_uuid": "1391086c-46cd-49bd-a320-5dd6e946462c", 00:07:37.233 "base_bdev": "aio_bdev", 00:07:37.233 "thin_provision": false, 00:07:37.233 "num_allocated_clusters": 38, 00:07:37.233 "snapshot": false, 00:07:37.233 "clone": false, 00:07:37.233 "esnap_clone": false 00:07:37.233 } 00:07:37.233 } 00:07:37.233 } 00:07:37.233 ] 00:07:37.233 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:37.233 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:37.234 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:37.493 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:37.493 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:37.493 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:37.493 13:49:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:37.493 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 22638d5e-5e20-4190-8af5-ade5e0a21d3e 00:07:37.753 13:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1391086c-46cd-49bd-a320-5dd6e946462c 00:07:37.753 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.014 00:07:38.014 real 0m15.956s 00:07:38.014 user 0m15.698s 00:07:38.014 sys 0m1.386s 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 ************************************ 00:07:38.014 END TEST lvs_grow_clean 00:07:38.014 ************************************ 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 ************************************ 
00:07:38.014 START TEST lvs_grow_dirty 00:07:38.014 ************************************ 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.014 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.274 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:38.274 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:38.533 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:38.533 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:38.533 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 lvol 150 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.793 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:39.054 [2024-11-06 13:49:25.139285] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:39.054 [2024-11-06 13:49:25.139325] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:39.054 true 00:07:39.054 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:39.054 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:39.054 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:39.054 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.313 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:39.573 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:39.573 [2024-11-06 13:49:25.781130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.573 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2228147 00:07:39.833 13:49:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2228147 /var/tmp/bdevperf.sock 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2228147 ']' 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.833 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:39.833 [2024-11-06 13:49:26.013226] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:07:39.833 [2024-11-06 13:49:26.013278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228147 ] 00:07:39.833 [2024-11-06 13:49:26.098276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.092 [2024-11-06 13:49:26.128212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.661 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.661 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:40.661 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:40.921 Nvme0n1 00:07:40.921 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:41.181 [ 00:07:41.181 { 00:07:41.181 "name": "Nvme0n1", 00:07:41.181 "aliases": [ 00:07:41.181 "65ec8c14-1aeb-440b-b55a-18a0e667d860" 00:07:41.181 ], 00:07:41.181 "product_name": "NVMe disk", 00:07:41.181 "block_size": 4096, 00:07:41.181 "num_blocks": 38912, 00:07:41.181 "uuid": "65ec8c14-1aeb-440b-b55a-18a0e667d860", 00:07:41.181 "numa_id": 0, 00:07:41.181 "assigned_rate_limits": { 00:07:41.181 "rw_ios_per_sec": 0, 00:07:41.181 "rw_mbytes_per_sec": 0, 00:07:41.181 "r_mbytes_per_sec": 0, 00:07:41.181 "w_mbytes_per_sec": 0 00:07:41.181 }, 00:07:41.181 "claimed": false, 00:07:41.181 "zoned": false, 00:07:41.181 "supported_io_types": { 00:07:41.181 "read": true, 
00:07:41.181 "write": true, 00:07:41.181 "unmap": true, 00:07:41.181 "flush": true, 00:07:41.181 "reset": true, 00:07:41.181 "nvme_admin": true, 00:07:41.181 "nvme_io": true, 00:07:41.181 "nvme_io_md": false, 00:07:41.181 "write_zeroes": true, 00:07:41.181 "zcopy": false, 00:07:41.181 "get_zone_info": false, 00:07:41.181 "zone_management": false, 00:07:41.181 "zone_append": false, 00:07:41.181 "compare": true, 00:07:41.181 "compare_and_write": true, 00:07:41.181 "abort": true, 00:07:41.181 "seek_hole": false, 00:07:41.181 "seek_data": false, 00:07:41.181 "copy": true, 00:07:41.181 "nvme_iov_md": false 00:07:41.181 }, 00:07:41.181 "memory_domains": [ 00:07:41.181 { 00:07:41.181 "dma_device_id": "system", 00:07:41.181 "dma_device_type": 1 00:07:41.181 } 00:07:41.181 ], 00:07:41.181 "driver_specific": { 00:07:41.181 "nvme": [ 00:07:41.181 { 00:07:41.181 "trid": { 00:07:41.181 "trtype": "TCP", 00:07:41.181 "adrfam": "IPv4", 00:07:41.181 "traddr": "10.0.0.2", 00:07:41.181 "trsvcid": "4420", 00:07:41.181 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:41.181 }, 00:07:41.181 "ctrlr_data": { 00:07:41.181 "cntlid": 1, 00:07:41.181 "vendor_id": "0x8086", 00:07:41.181 "model_number": "SPDK bdev Controller", 00:07:41.181 "serial_number": "SPDK0", 00:07:41.181 "firmware_revision": "25.01", 00:07:41.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.181 "oacs": { 00:07:41.181 "security": 0, 00:07:41.181 "format": 0, 00:07:41.181 "firmware": 0, 00:07:41.181 "ns_manage": 0 00:07:41.181 }, 00:07:41.181 "multi_ctrlr": true, 00:07:41.181 "ana_reporting": false 00:07:41.181 }, 00:07:41.181 "vs": { 00:07:41.181 "nvme_version": "1.3" 00:07:41.181 }, 00:07:41.181 "ns_data": { 00:07:41.181 "id": 1, 00:07:41.181 "can_share": true 00:07:41.181 } 00:07:41.181 } 00:07:41.181 ], 00:07:41.181 "mp_policy": "active_passive" 00:07:41.181 } 00:07:41.181 } 00:07:41.181 ] 00:07:41.181 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2228280 00:07:41.181 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:41.181 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.181 Running I/O for 10 seconds... 00:07:42.120 Latency(us) 00:07:42.120 [2024-11-06T12:49:28.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.120 Nvme0n1 : 1.00 25064.00 97.91 0.00 0.00 0.00 0.00 0.00 00:07:42.120 [2024-11-06T12:49:28.400Z] =================================================================================================================== 00:07:42.120 [2024-11-06T12:49:28.400Z] Total : 25064.00 97.91 0.00 0.00 0.00 0.00 0.00 00:07:42.120 00:07:43.059 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:43.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.059 Nvme0n1 : 2.00 25259.00 98.67 0.00 0.00 0.00 0.00 0.00 00:07:43.059 [2024-11-06T12:49:29.339Z] =================================================================================================================== 00:07:43.059 [2024-11-06T12:49:29.339Z] Total : 25259.00 98.67 0.00 0.00 0.00 0.00 0.00 00:07:43.059 00:07:43.319 true 00:07:43.319 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:43.319 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:43.577 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:43.577 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:43.577 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2228280 00:07:44.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.146 Nvme0n1 : 3.00 25329.00 98.94 0.00 0.00 0.00 0.00 0.00 00:07:44.146 [2024-11-06T12:49:30.426Z] =================================================================================================================== 00:07:44.146 [2024-11-06T12:49:30.426Z] Total : 25329.00 98.94 0.00 0.00 0.00 0.00 0.00 00:07:44.146 00:07:45.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.086 Nvme0n1 : 4.00 25380.75 99.14 0.00 0.00 0.00 0.00 0.00 00:07:45.086 [2024-11-06T12:49:31.366Z] =================================================================================================================== 00:07:45.086 [2024-11-06T12:49:31.366Z] Total : 25380.75 99.14 0.00 0.00 0.00 0.00 0.00 00:07:45.086 00:07:46.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.467 Nvme0n1 : 5.00 25415.00 99.28 0.00 0.00 0.00 0.00 0.00 00:07:46.467 [2024-11-06T12:49:32.747Z] =================================================================================================================== 00:07:46.467 [2024-11-06T12:49:32.747Z] Total : 25415.00 99.28 0.00 0.00 0.00 0.00 0.00 00:07:46.467 00:07:47.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.407 Nvme0n1 : 6.00 25453.33 99.43 0.00 0.00 0.00 0.00 0.00 00:07:47.407 [2024-11-06T12:49:33.687Z] =================================================================================================================== 00:07:47.407 
[2024-11-06T12:49:33.687Z] Total : 25453.33 99.43 0.00 0.00 0.00 0.00 0.00 00:07:47.407 00:07:48.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.348 Nvme0n1 : 7.00 25474.29 99.51 0.00 0.00 0.00 0.00 0.00 00:07:48.348 [2024-11-06T12:49:34.628Z] =================================================================================================================== 00:07:48.348 [2024-11-06T12:49:34.628Z] Total : 25474.29 99.51 0.00 0.00 0.00 0.00 0.00 00:07:48.348 00:07:49.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.287 Nvme0n1 : 8.00 25497.75 99.60 0.00 0.00 0.00 0.00 0.00 00:07:49.287 [2024-11-06T12:49:35.567Z] =================================================================================================================== 00:07:49.287 [2024-11-06T12:49:35.567Z] Total : 25497.75 99.60 0.00 0.00 0.00 0.00 0.00 00:07:49.287 00:07:50.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.227 Nvme0n1 : 9.00 25509.11 99.64 0.00 0.00 0.00 0.00 0.00 00:07:50.227 [2024-11-06T12:49:36.507Z] =================================================================================================================== 00:07:50.227 [2024-11-06T12:49:36.507Z] Total : 25509.11 99.64 0.00 0.00 0.00 0.00 0.00 00:07:50.227 00:07:51.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.213 Nvme0n1 : 10.00 25524.60 99.71 0.00 0.00 0.00 0.00 0.00 00:07:51.213 [2024-11-06T12:49:37.493Z] =================================================================================================================== 00:07:51.213 [2024-11-06T12:49:37.493Z] Total : 25524.60 99.71 0.00 0.00 0.00 0.00 0.00 00:07:51.213 00:07:51.213 00:07:51.213 Latency(us) 00:07:51.213 [2024-11-06T12:49:37.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:51.213 Nvme0n1 : 10.00 25521.98 99.70 0.00 0.00 5012.03 2976.43 12779.52 00:07:51.213 [2024-11-06T12:49:37.493Z] =================================================================================================================== 00:07:51.213 [2024-11-06T12:49:37.493Z] Total : 25521.98 99.70 0.00 0.00 5012.03 2976.43 12779.52 00:07:51.213 { 00:07:51.213 "results": [ 00:07:51.213 { 00:07:51.213 "job": "Nvme0n1", 00:07:51.213 "core_mask": "0x2", 00:07:51.213 "workload": "randwrite", 00:07:51.213 "status": "finished", 00:07:51.213 "queue_depth": 128, 00:07:51.213 "io_size": 4096, 00:07:51.213 "runtime": 10.003495, 00:07:51.213 "iops": 25521.980067966248, 00:07:51.213 "mibps": 99.69523464049315, 00:07:51.213 "io_failed": 0, 00:07:51.213 "io_timeout": 0, 00:07:51.213 "avg_latency_us": 5012.028989290103, 00:07:51.213 "min_latency_us": 2976.4266666666667, 00:07:51.213 "max_latency_us": 12779.52 00:07:51.213 } 00:07:51.213 ], 00:07:51.213 "core_count": 1 00:07:51.213 } 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2228147 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2228147 ']' 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2228147 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2228147 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2228147' 00:07:51.213 killing process with pid 2228147 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2228147 00:07:51.213 Received shutdown signal, test time was about 10.000000 seconds 00:07:51.213 00:07:51.213 Latency(us) 00:07:51.213 [2024-11-06T12:49:37.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.213 [2024-11-06T12:49:37.493Z] =================================================================================================================== 00:07:51.213 [2024-11-06T12:49:37.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:51.213 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2228147 00:07:51.474 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.474 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.735 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:51.735 13:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:51.995 13:49:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2224324 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2224324 00:07:51.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2224324 Killed "${NVMF_APP[@]}" "$@" 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2230520 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2230520 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2230520 ']' 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.995 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.996 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.996 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.996 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.996 [2024-11-06 13:49:38.161185] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:07:51.996 [2024-11-06 13:49:38.161239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.996 [2024-11-06 13:49:38.252045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.256 [2024-11-06 13:49:38.280984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.256 [2024-11-06 13:49:38.281011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.256 [2024-11-06 13:49:38.281017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.256 [2024-11-06 13:49:38.281021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.256 [2024-11-06 13:49:38.281025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
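The sizes reported throughout this run (block counts 51200 and 102400 from `bdev_aio_rescan`, `total_data_clusters` of 49 before and 99 after the grow, `free_clusters` of 61) follow from the parameters the test passes: a 200M file truncated to 400M, a 4096-byte AIO block size, and `--cluster-sz 4194304` (4 MiB). A minimal sketch of that arithmetic, assuming — as this particular log shows, not as a general SPDK guarantee — that one cluster is consumed by lvstore metadata:

```shell
# Numbers taken from this log; the "-1 metadata cluster" is an
# inference from the reported totals, not documented behavior.
aio_size_mb=200     # initial: truncate -s 200M .../aio_bdev
aio_final_mb=400    # before bdev_aio_rescan: truncate -s 400M
block_size=4096     # bdev_aio_create ... aio_bdev 4096
cluster_mb=4        # bdev_lvol_create_lvstore --cluster-sz 4194304
lvol_mb=150         # bdev_lvol_create -u <lvs uuid> lvol 150

# AIO block counts logged by bdev_aio_rescan: old 51200, new 102400
blocks_before=$(( aio_size_mb * 1024 * 1024 / block_size ))
blocks_after=$(( aio_final_mb * 1024 * 1024 / block_size ))

# total_data_clusters: (size / cluster) - 1 in this run (49, then 99)
clusters_before=$(( aio_size_mb / cluster_mb - 1 ))
clusters_after=$(( aio_final_mb / cluster_mb - 1 ))

# The 150M lvol occupies ceil(150/4) = 38 clusters
# ("num_allocated_clusters": 38), leaving free_clusters = 99 - 38 = 61.
lvol_clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
free_clusters=$(( clusters_after - lvol_clusters ))

echo "$blocks_before $blocks_after $clusters_before $clusters_after $free_clusters"
```

Running the sketch prints `51200 102400 49 99 61`, matching the rescan notice and the `jq -r '.[0].total_data_clusters'` / `'.[0].free_clusters'` checks above.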
00:07:52.256 [2024-11-06 13:49:38.281449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.827 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.827 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:52.827 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.828 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.828 13:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.828 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.828 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.088 [2024-11-06 13:49:39.159896] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:53.088 [2024-11-06 13:49:39.159965] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:53.088 [2024-11-06 13:49:39.159987] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=65ec8c14-1aeb-440b-b55a-18a0e667d860 
00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:53.088 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 65ec8c14-1aeb-440b-b55a-18a0e667d860 -t 2000 00:07:53.349 [ 00:07:53.349 { 00:07:53.349 "name": "65ec8c14-1aeb-440b-b55a-18a0e667d860", 00:07:53.349 "aliases": [ 00:07:53.349 "lvs/lvol" 00:07:53.349 ], 00:07:53.349 "product_name": "Logical Volume", 00:07:53.349 "block_size": 4096, 00:07:53.349 "num_blocks": 38912, 00:07:53.349 "uuid": "65ec8c14-1aeb-440b-b55a-18a0e667d860", 00:07:53.349 "assigned_rate_limits": { 00:07:53.349 "rw_ios_per_sec": 0, 00:07:53.349 "rw_mbytes_per_sec": 0, 00:07:53.349 "r_mbytes_per_sec": 0, 00:07:53.349 "w_mbytes_per_sec": 0 00:07:53.349 }, 00:07:53.349 "claimed": false, 00:07:53.349 "zoned": false, 00:07:53.349 "supported_io_types": { 00:07:53.349 "read": true, 00:07:53.349 "write": true, 00:07:53.349 "unmap": true, 00:07:53.349 "flush": false, 00:07:53.349 "reset": true, 00:07:53.349 "nvme_admin": false, 00:07:53.349 "nvme_io": false, 00:07:53.349 "nvme_io_md": false, 00:07:53.349 "write_zeroes": true, 00:07:53.349 "zcopy": false, 00:07:53.349 "get_zone_info": false, 00:07:53.349 "zone_management": false, 00:07:53.349 "zone_append": 
false, 00:07:53.349 "compare": false, 00:07:53.349 "compare_and_write": false, 00:07:53.349 "abort": false, 00:07:53.349 "seek_hole": true, 00:07:53.349 "seek_data": true, 00:07:53.349 "copy": false, 00:07:53.349 "nvme_iov_md": false 00:07:53.349 }, 00:07:53.349 "driver_specific": { 00:07:53.349 "lvol": { 00:07:53.349 "lvol_store_uuid": "6a4d9db9-0091-4fde-864d-d784ae6e18f3", 00:07:53.349 "base_bdev": "aio_bdev", 00:07:53.349 "thin_provision": false, 00:07:53.349 "num_allocated_clusters": 38, 00:07:53.349 "snapshot": false, 00:07:53.349 "clone": false, 00:07:53.349 "esnap_clone": false 00:07:53.349 } 00:07:53.349 } 00:07:53.349 } 00:07:53.349 ] 00:07:53.349 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:53.349 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:53.349 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:53.609 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:53.609 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:53.609 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:53.610 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:53.610 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:53.871 [2024-11-06 13:49:40.000541] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.871 13:49:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:53.871 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:54.132 request: 00:07:54.132 { 00:07:54.132 "uuid": "6a4d9db9-0091-4fde-864d-d784ae6e18f3", 00:07:54.132 "method": "bdev_lvol_get_lvstores", 00:07:54.132 "req_id": 1 00:07:54.132 } 00:07:54.132 Got JSON-RPC error response 00:07:54.132 response: 00:07:54.132 { 00:07:54.132 "code": -19, 00:07:54.132 "message": "No such device" 00:07:54.132 } 00:07:54.132 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:54.132 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.132 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.132 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.132 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.132 aio_bdev 00:07:54.393 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:54.393 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:54.393 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:54.393 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:54.393 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:54.394 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:54.394 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.394 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 65ec8c14-1aeb-440b-b55a-18a0e667d860 -t 2000 00:07:54.655 [ 00:07:54.655 { 00:07:54.655 "name": "65ec8c14-1aeb-440b-b55a-18a0e667d860", 00:07:54.655 "aliases": [ 00:07:54.655 "lvs/lvol" 00:07:54.655 ], 00:07:54.655 "product_name": "Logical Volume", 00:07:54.655 "block_size": 4096, 00:07:54.655 "num_blocks": 38912, 00:07:54.655 "uuid": "65ec8c14-1aeb-440b-b55a-18a0e667d860", 00:07:54.655 "assigned_rate_limits": { 00:07:54.655 "rw_ios_per_sec": 0, 00:07:54.655 "rw_mbytes_per_sec": 0, 00:07:54.655 "r_mbytes_per_sec": 0, 00:07:54.655 "w_mbytes_per_sec": 0 00:07:54.655 }, 00:07:54.655 "claimed": false, 00:07:54.655 "zoned": false, 00:07:54.655 "supported_io_types": { 00:07:54.655 "read": true, 00:07:54.655 "write": true, 00:07:54.655 "unmap": true, 00:07:54.655 "flush": false, 00:07:54.655 "reset": true, 00:07:54.655 "nvme_admin": false, 00:07:54.655 "nvme_io": false, 00:07:54.655 "nvme_io_md": false, 00:07:54.655 "write_zeroes": true, 00:07:54.655 "zcopy": false, 00:07:54.655 "get_zone_info": false, 00:07:54.655 "zone_management": false, 00:07:54.655 "zone_append": false, 00:07:54.655 "compare": false, 00:07:54.655 "compare_and_write": false, 
00:07:54.655 "abort": false, 00:07:54.655 "seek_hole": true, 00:07:54.655 "seek_data": true, 00:07:54.655 "copy": false, 00:07:54.655 "nvme_iov_md": false 00:07:54.655 }, 00:07:54.655 "driver_specific": { 00:07:54.655 "lvol": { 00:07:54.655 "lvol_store_uuid": "6a4d9db9-0091-4fde-864d-d784ae6e18f3", 00:07:54.655 "base_bdev": "aio_bdev", 00:07:54.655 "thin_provision": false, 00:07:54.655 "num_allocated_clusters": 38, 00:07:54.655 "snapshot": false, 00:07:54.655 "clone": false, 00:07:54.655 "esnap_clone": false 00:07:54.655 } 00:07:54.655 } 00:07:54.655 } 00:07:54.655 ] 00:07:54.655 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:54.655 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:54.655 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.916 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.916 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:54.916 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:54.916 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:54.916 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 65ec8c14-1aeb-440b-b55a-18a0e667d860 00:07:55.176 13:49:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a4d9db9-0091-4fde-864d-d784ae6e18f3 00:07:55.436 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.436 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.436 00:07:55.436 real 0m17.406s 00:07:55.436 user 0m45.794s 00:07:55.436 sys 0m2.911s 00:07:55.436 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.436 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.436 ************************************ 00:07:55.436 END TEST lvs_grow_dirty 00:07:55.436 ************************************ 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:55.696 nvmf_trace.0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.696 rmmod nvme_tcp 00:07:55.696 rmmod nvme_fabrics 00:07:55.696 rmmod nvme_keyring 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2230520 ']' 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2230520 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2230520 ']' 00:07:55.696 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2230520 
00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2230520 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2230520' 00:07:55.697 killing process with pid 2230520 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2230520 00:07:55.697 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2230520 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.957 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.869 00:07:57.869 real 0m44.829s 00:07:57.869 user 1m8.048s 00:07:57.869 sys 0m10.377s 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.869 ************************************ 00:07:57.869 END TEST nvmf_lvs_grow 00:07:57.869 ************************************ 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.869 13:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.130 ************************************ 00:07:58.130 START TEST nvmf_bdev_io_wait 00:07:58.130 ************************************ 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:58.130 * Looking for test storage... 
00:07:58.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.130 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.130 --rc genhtml_branch_coverage=1 00:07:58.130 --rc genhtml_function_coverage=1 00:07:58.130 --rc genhtml_legend=1 00:07:58.130 --rc geninfo_all_blocks=1 00:07:58.130 --rc geninfo_unexecuted_blocks=1 00:07:58.130 00:07:58.130 ' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.130 --rc genhtml_branch_coverage=1 00:07:58.130 --rc genhtml_function_coverage=1 00:07:58.130 --rc genhtml_legend=1 00:07:58.130 --rc geninfo_all_blocks=1 00:07:58.130 --rc geninfo_unexecuted_blocks=1 00:07:58.130 00:07:58.130 ' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.130 --rc genhtml_branch_coverage=1 00:07:58.130 --rc genhtml_function_coverage=1 00:07:58.130 --rc genhtml_legend=1 00:07:58.130 --rc geninfo_all_blocks=1 00:07:58.130 --rc geninfo_unexecuted_blocks=1 00:07:58.130 00:07:58.130 ' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.130 --rc genhtml_branch_coverage=1 00:07:58.130 --rc genhtml_function_coverage=1 00:07:58.130 --rc genhtml_legend=1 00:07:58.130 --rc geninfo_all_blocks=1 00:07:58.130 --rc geninfo_unexecuted_blocks=1 00:07:58.130 00:07:58.130 ' 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.130 13:49:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.130 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.131 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.416 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.674 13:49:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:06.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:06.674 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.674 13:49:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:06.674 Found net devices under 0000:31:00.0: cvl_0_0 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.674 
13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:06.674 Found net devices under 0000:31:00.1: cvl_0_1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.674 13:49:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.674 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:08:06.675 00:08:06.675 --- 10.0.0.2 ping statistics --- 00:08:06.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.675 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:08:06.675 00:08:06.675 --- 10.0.0.1 ping statistics --- 00:08:06.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.675 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.675 13:49:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2235637 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2235637 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2235637 ']' 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.675 [2024-11-06 13:49:52.089977] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:06.675 [2024-11-06 13:49:52.090041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.675 [2024-11-06 13:49:52.191922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.675 [2024-11-06 13:49:52.246327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.675 [2024-11-06 13:49:52.246383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:06.675 [2024-11-06 13:49:52.246396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.675 [2024-11-06 13:49:52.246404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.675 [2024-11-06 13:49:52.246410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.675 [2024-11-06 13:49:52.248833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.675 [2024-11-06 13:49:52.248993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.675 [2024-11-06 13:49:52.249156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.675 [2024-11-06 13:49:52.249157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.675 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 13:49:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 [2024-11-06 13:49:53.040994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 Malloc0 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 
13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 [2024-11-06 13:49:53.106682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2235747 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2235749 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.937 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.937 { 00:08:06.937 "params": { 00:08:06.937 "name": "Nvme$subsystem", 00:08:06.937 "trtype": "$TEST_TRANSPORT", 00:08:06.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "$NVMF_PORT", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.938 "hdgst": ${hdgst:-false}, 00:08:06.938 "ddgst": ${ddgst:-false} 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 } 00:08:06.938 EOF 00:08:06.938 )") 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2235752 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.938 { 00:08:06.938 "params": { 00:08:06.938 
"name": "Nvme$subsystem", 00:08:06.938 "trtype": "$TEST_TRANSPORT", 00:08:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "$NVMF_PORT", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.938 "hdgst": ${hdgst:-false}, 00:08:06.938 "ddgst": ${ddgst:-false} 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 } 00:08:06.938 EOF 00:08:06.938 )") 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2235756 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.938 { 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme$subsystem", 00:08:06.938 "trtype": "$TEST_TRANSPORT", 00:08:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "$NVMF_PORT", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.938 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:06.938 "hdgst": ${hdgst:-false}, 00:08:06.938 "ddgst": ${ddgst:-false} 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 } 00:08:06.938 EOF 00:08:06.938 )") 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.938 { 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme$subsystem", 00:08:06.938 "trtype": "$TEST_TRANSPORT", 00:08:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "$NVMF_PORT", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.938 "hdgst": ${hdgst:-false}, 00:08:06.938 "ddgst": ${ddgst:-false} 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 } 00:08:06.938 EOF 00:08:06.938 )") 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2235747 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme1", 00:08:06.938 "trtype": "tcp", 00:08:06.938 "traddr": "10.0.0.2", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "4420", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.938 "hdgst": false, 00:08:06.938 "ddgst": false 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 }' 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme1", 00:08:06.938 "trtype": "tcp", 00:08:06.938 "traddr": "10.0.0.2", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "4420", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.938 "hdgst": false, 00:08:06.938 "ddgst": false 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 }' 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme1", 00:08:06.938 "trtype": "tcp", 00:08:06.938 "traddr": "10.0.0.2", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "4420", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.938 "hdgst": false, 00:08:06.938 "ddgst": false 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 }' 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.938 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.938 "params": { 00:08:06.938 "name": "Nvme1", 00:08:06.938 "trtype": "tcp", 00:08:06.938 "traddr": "10.0.0.2", 00:08:06.938 "adrfam": "ipv4", 00:08:06.938 "trsvcid": "4420", 00:08:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.938 "hdgst": false, 00:08:06.938 "ddgst": false 00:08:06.938 }, 00:08:06.938 "method": "bdev_nvme_attach_controller" 00:08:06.938 }' 00:08:06.938 [2024-11-06 13:49:53.165972] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:08:06.938 [2024-11-06 13:49:53.166034] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:06.938 [2024-11-06 13:49:53.167654] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:06.938 [2024-11-06 13:49:53.167680] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:06.938 [2024-11-06 13:49:53.167732] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:06.938 [2024-11-06 13:49:53.167743] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:06.938 [2024-11-06 13:49:53.169350] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:08:06.938 [2024-11-06 13:49:53.169416] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:07.200 [2024-11-06 13:49:53.372063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.200 [2024-11-06 13:49:53.412458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.200 [2024-11-06 13:49:53.462114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.461 [2024-11-06 13:49:53.501528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:07.461 [2024-11-06 13:49:53.560586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.461 [2024-11-06 13:49:53.603462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:07.461 [2024-11-06 13:49:53.629200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.461 [2024-11-06 13:49:53.667087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:07.722 Running I/O for 1 seconds... 00:08:07.722 Running I/O for 1 seconds... 00:08:07.722 Running I/O for 1 seconds... 00:08:07.722 Running I/O for 1 seconds... 
00:08:08.667 12119.00 IOPS, 47.34 MiB/s 00:08:08.667 Latency(us) 00:08:08.667 [2024-11-06T12:49:54.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.667 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:08.667 Nvme1n1 : 1.01 12179.16 47.57 0.00 0.00 10474.13 5324.80 17148.59 00:08:08.667 [2024-11-06T12:49:54.947Z] =================================================================================================================== 00:08:08.667 [2024-11-06T12:49:54.947Z] Total : 12179.16 47.57 0.00 0.00 10474.13 5324.80 17148.59 00:08:08.667 9156.00 IOPS, 35.77 MiB/s 00:08:08.667 Latency(us) 00:08:08.667 [2024-11-06T12:49:54.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.668 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:08.668 Nvme1n1 : 1.01 9224.27 36.03 0.00 0.00 13819.07 6362.45 22282.24 00:08:08.668 [2024-11-06T12:49:54.948Z] =================================================================================================================== 00:08:08.668 [2024-11-06T12:49:54.948Z] Total : 9224.27 36.03 0.00 0.00 13819.07 6362.45 22282.24 00:08:08.930 10078.00 IOPS, 39.37 MiB/s [2024-11-06T12:49:55.210Z] 13:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2235749 00:08:08.930 00:08:08.930 Latency(us) 00:08:08.930 [2024-11-06T12:49:55.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.930 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:08.930 Nvme1n1 : 1.01 10137.82 39.60 0.00 0.00 12583.02 5079.04 21736.11 00:08:08.930 [2024-11-06T12:49:55.210Z] =================================================================================================================== 00:08:08.930 [2024-11-06T12:49:55.210Z] Total : 10137.82 39.60 0.00 0.00 12583.02 5079.04 21736.11 00:08:08.930 186616.00 IOPS, 728.97 MiB/s 00:08:08.930 Latency(us) 
00:08:08.930 [2024-11-06T12:49:55.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.930 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:08.930 Nvme1n1 : 1.00 186241.83 727.51 0.00 0.00 683.48 300.37 1979.73 00:08:08.930 [2024-11-06T12:49:55.210Z] =================================================================================================================== 00:08:08.930 [2024-11-06T12:49:55.210Z] Total : 186241.83 727.51 0.00 0.00 683.48 300.37 1979.73 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2235752 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2235756 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.930 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 
-- # for i in {1..20} 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.931 rmmod nvme_tcp 00:08:08.931 rmmod nvme_fabrics 00:08:08.931 rmmod nvme_keyring 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2235637 ']' 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2235637 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2235637 ']' 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2235637 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.931 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2235637 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2235637' 00:08:09.192 killing process with pid 2235637 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2235637 00:08:09.192 13:49:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2235637 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.192 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.742 00:08:11.742 real 0m13.304s 00:08:11.742 user 0m20.292s 00:08:11.742 sys 0m7.483s 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.742 ************************************ 
00:08:11.742 END TEST nvmf_bdev_io_wait 00:08:11.742 ************************************ 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.742 ************************************ 00:08:11.742 START TEST nvmf_queue_depth 00:08:11.742 ************************************ 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:11.742 * Looking for test storage... 00:08:11.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:11.742 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.743 --rc genhtml_branch_coverage=1 00:08:11.743 --rc genhtml_function_coverage=1 00:08:11.743 --rc genhtml_legend=1 00:08:11.743 --rc geninfo_all_blocks=1 00:08:11.743 --rc 
geninfo_unexecuted_blocks=1 00:08:11.743 00:08:11.743 ' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.743 --rc genhtml_branch_coverage=1 00:08:11.743 --rc genhtml_function_coverage=1 00:08:11.743 --rc genhtml_legend=1 00:08:11.743 --rc geninfo_all_blocks=1 00:08:11.743 --rc geninfo_unexecuted_blocks=1 00:08:11.743 00:08:11.743 ' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.743 --rc genhtml_branch_coverage=1 00:08:11.743 --rc genhtml_function_coverage=1 00:08:11.743 --rc genhtml_legend=1 00:08:11.743 --rc geninfo_all_blocks=1 00:08:11.743 --rc geninfo_unexecuted_blocks=1 00:08:11.743 00:08:11.743 ' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.743 --rc genhtml_branch_coverage=1 00:08:11.743 --rc genhtml_function_coverage=1 00:08:11.743 --rc genhtml_legend=1 00:08:11.743 --rc geninfo_all_blocks=1 00:08:11.743 --rc geninfo_unexecuted_blocks=1 00:08:11.743 00:08:11.743 ' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.743 13:49:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.743 13:49:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:11.743 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.744 13:49:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.744 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.886 13:50:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:19.886 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.886 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:19.887 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:19.887 Found net devices under 0000:31:00.0: cvl_0_0 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:19.887 Found net devices under 0000:31:00.1: cvl_0_1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.887 
13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:08:19.887 00:08:19.887 --- 10.0.0.2 ping statistics --- 00:08:19.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.887 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:08:19.887 00:08:19.887 --- 10.0.0.1 ping statistics --- 00:08:19.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.887 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2240501 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2240501 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2240501 ']' 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.887 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.887 [2024-11-06 13:50:05.514004] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:19.887 [2024-11-06 13:50:05.514070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.887 [2024-11-06 13:50:05.617777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.887 [2024-11-06 13:50:05.669550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.887 [2024-11-06 13:50:05.669598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:19.887 [2024-11-06 13:50:05.669607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.887 [2024-11-06 13:50:05.669614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.887 [2024-11-06 13:50:05.669620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.887 [2024-11-06 13:50:05.670394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 [2024-11-06 13:50:06.380552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 Malloc0 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.148 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.409 [2024-11-06 13:50:06.441741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.409 13:50:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2240772 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2240772 /var/tmp/bdevperf.sock 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2240772 ']' 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.409 13:50:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.409 [2024-11-06 13:50:06.500279] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:08:20.409 [2024-11-06 13:50:06.500339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240772 ] 00:08:20.409 [2024-11-06 13:50:06.592259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.409 [2024-11-06 13:50:06.645333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.352 NVMe0n1 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.352 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:21.352 Running I/O for 10 seconds... 
00:08:23.237 8193.00 IOPS, 32.00 MiB/s [2024-11-06T12:50:10.904Z] 9608.00 IOPS, 37.53 MiB/s [2024-11-06T12:50:11.846Z] 10244.33 IOPS, 40.02 MiB/s [2024-11-06T12:50:12.786Z] 10758.00 IOPS, 42.02 MiB/s [2024-11-06T12:50:13.728Z] 11262.80 IOPS, 44.00 MiB/s [2024-11-06T12:50:14.673Z] 11603.17 IOPS, 45.32 MiB/s [2024-11-06T12:50:15.614Z] 11848.71 IOPS, 46.28 MiB/s [2024-11-06T12:50:16.557Z] 12049.12 IOPS, 47.07 MiB/s [2024-11-06T12:50:17.937Z] 12176.11 IOPS, 47.56 MiB/s [2024-11-06T12:50:17.937Z] 12291.40 IOPS, 48.01 MiB/s 00:08:31.657 Latency(us) 00:08:31.657 [2024-11-06T12:50:17.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.657 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:31.657 Verification LBA range: start 0x0 length 0x4000 00:08:31.657 NVMe0n1 : 10.04 12340.51 48.21 0.00 0.00 82710.20 9721.17 75584.85 00:08:31.657 [2024-11-06T12:50:17.937Z] =================================================================================================================== 00:08:31.657 [2024-11-06T12:50:17.937Z] Total : 12340.51 48.21 0.00 0.00 82710.20 9721.17 75584.85 00:08:31.657 { 00:08:31.657 "results": [ 00:08:31.657 { 00:08:31.657 "job": "NVMe0n1", 00:08:31.657 "core_mask": "0x1", 00:08:31.657 "workload": "verify", 00:08:31.657 "status": "finished", 00:08:31.657 "verify_range": { 00:08:31.657 "start": 0, 00:08:31.657 "length": 16384 00:08:31.657 }, 00:08:31.657 "queue_depth": 1024, 00:08:31.657 "io_size": 4096, 00:08:31.657 "runtime": 10.041721, 00:08:31.657 "iops": 12340.514140952531, 00:08:31.657 "mibps": 48.205133363095825, 00:08:31.657 "io_failed": 0, 00:08:31.657 "io_timeout": 0, 00:08:31.657 "avg_latency_us": 82710.20253238648, 00:08:31.657 "min_latency_us": 9721.173333333334, 00:08:31.657 "max_latency_us": 75584.85333333333 00:08:31.657 } 00:08:31.657 ], 00:08:31.657 "core_count": 1 00:08:31.657 } 00:08:31.657 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2240772 00:08:31.657 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2240772 ']' 00:08:31.657 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2240772 00:08:31.657 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:31.657 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2240772 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2240772' 00:08:31.658 killing process with pid 2240772 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2240772 00:08:31.658 Received shutdown signal, test time was about 10.000000 seconds 00:08:31.658 00:08:31.658 Latency(us) 00:08:31.658 [2024-11-06T12:50:17.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.658 [2024-11-06T12:50:17.938Z] =================================================================================================================== 00:08:31.658 [2024-11-06T12:50:17.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2240772 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.658 rmmod nvme_tcp 00:08:31.658 rmmod nvme_fabrics 00:08:31.658 rmmod nvme_keyring 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2240501 ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2240501 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2240501 ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2240501 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2240501 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2240501' 00:08:31.658 killing process with pid 2240501 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2240501 00:08:31.658 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2240501 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.918 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.828 13:50:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.828 00:08:33.828 real 0m22.501s 00:08:33.828 user 0m25.403s 00:08:33.828 sys 0m7.270s 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.828 ************************************ 00:08:33.828 END TEST nvmf_queue_depth 00:08:33.828 ************************************ 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.828 13:50:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.089 ************************************ 00:08:34.089 START TEST nvmf_target_multipath 00:08:34.089 ************************************ 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:34.089 * Looking for test storage... 
00:08:34.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:34.089 13:50:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.089 --rc genhtml_branch_coverage=1 00:08:34.089 --rc genhtml_function_coverage=1 00:08:34.089 --rc genhtml_legend=1 00:08:34.089 --rc geninfo_all_blocks=1 00:08:34.089 --rc geninfo_unexecuted_blocks=1 00:08:34.089 00:08:34.089 ' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.089 --rc genhtml_branch_coverage=1 00:08:34.089 --rc genhtml_function_coverage=1 00:08:34.089 --rc genhtml_legend=1 00:08:34.089 --rc geninfo_all_blocks=1 00:08:34.089 --rc geninfo_unexecuted_blocks=1 00:08:34.089 00:08:34.089 ' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.089 --rc genhtml_branch_coverage=1 00:08:34.089 --rc genhtml_function_coverage=1 00:08:34.089 --rc genhtml_legend=1 00:08:34.089 --rc geninfo_all_blocks=1 00:08:34.089 --rc geninfo_unexecuted_blocks=1 00:08:34.089 00:08:34.089 ' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.089 --rc genhtml_branch_coverage=1 00:08:34.089 --rc genhtml_function_coverage=1 00:08:34.089 --rc genhtml_legend=1 00:08:34.089 --rc geninfo_all_blocks=1 00:08:34.089 --rc geninfo_unexecuted_blocks=1 00:08:34.089 00:08:34.089 ' 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.089 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.090 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.352 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.499 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.499 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.499 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:42.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:42.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:42.500 Found net devices under 0000:31:00.0: cvl_0_0 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.500 13:50:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:42.500 Found net devices under 0000:31:00.1: cvl_0_1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:08:42.500 00:08:42.500 --- 10.0.0.2 ping statistics --- 00:08:42.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.500 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:08:42.500 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:08:42.500 00:08:42.500 --- 10.0.0.1 ping statistics --- 00:08:42.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.500 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:42.501 only one NIC for nvmf test 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:42.501 13:50:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.501 13:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.501 rmmod nvme_tcp 00:08:42.501 rmmod nvme_fabrics 00:08:42.501 rmmod nvme_keyring 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.501 13:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.886 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.147 00:08:44.147 real 0m10.033s 00:08:44.147 user 0m2.191s 00:08:44.147 sys 0m5.775s 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.147 ************************************ 00:08:44.147 END TEST nvmf_target_multipath 00:08:44.147 ************************************ 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.147 ************************************ 00:08:44.147 START TEST nvmf_zcopy 00:08:44.147 ************************************ 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:44.147 * Looking for test storage... 00:08:44.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:44.147 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.409 13:50:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.409 --rc genhtml_branch_coverage=1 00:08:44.409 --rc genhtml_function_coverage=1 00:08:44.409 --rc genhtml_legend=1 00:08:44.409 --rc geninfo_all_blocks=1 00:08:44.409 --rc geninfo_unexecuted_blocks=1 00:08:44.409 00:08:44.409 ' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.409 --rc genhtml_branch_coverage=1 00:08:44.409 --rc genhtml_function_coverage=1 00:08:44.409 --rc genhtml_legend=1 00:08:44.409 --rc geninfo_all_blocks=1 00:08:44.409 --rc geninfo_unexecuted_blocks=1 00:08:44.409 00:08:44.409 ' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.409 --rc genhtml_branch_coverage=1 00:08:44.409 --rc genhtml_function_coverage=1 00:08:44.409 --rc genhtml_legend=1 00:08:44.409 --rc geninfo_all_blocks=1 00:08:44.409 --rc geninfo_unexecuted_blocks=1 00:08:44.409 00:08:44.409 ' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.409 --rc genhtml_branch_coverage=1 00:08:44.409 --rc 
genhtml_function_coverage=1 00:08:44.409 --rc genhtml_legend=1 00:08:44.409 --rc geninfo_all_blocks=1 00:08:44.409 --rc geninfo_unexecuted_blocks=1 00:08:44.409 00:08:44.409 ' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.409 13:50:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.409 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.410 13:50:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.410 13:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.555 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:52.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:52.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:52.555 Found net devices under 0000:31:00.0: cvl_0_0 00:08:52.555 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.555 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:52.556 Found net devices under 0000:31:00.1: cvl_0_1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.556 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.556 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:08:52.556 00:08:52.556 --- 10.0.0.2 ping statistics --- 00:08:52.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.556 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:08:52.556 00:08:52.556 --- 10.0.0.1 ping statistics --- 00:08:52.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.556 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2251538 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2251538 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2251538 ']' 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.556 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.556 [2024-11-06 13:50:38.210972] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:52.556 [2024-11-06 13:50:38.211035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.556 [2024-11-06 13:50:38.311569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.556 [2024-11-06 13:50:38.361321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.556 [2024-11-06 13:50:38.361371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:52.556 [2024-11-06 13:50:38.361379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.556 [2024-11-06 13:50:38.361387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.556 [2024-11-06 13:50:38.361393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.556 [2024-11-06 13:50:38.362188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.818 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.818 [2024-11-06 13:50:39.092937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 [2024-11-06 13:50:39.117245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 malloc0 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.080 { 00:08:53.080 "params": { 00:08:53.080 "name": "Nvme$subsystem", 00:08:53.080 "trtype": "$TEST_TRANSPORT", 00:08:53.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.080 "adrfam": "ipv4", 00:08:53.080 "trsvcid": "$NVMF_PORT", 00:08:53.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.080 "hdgst": ${hdgst:-false}, 00:08:53.080 "ddgst": ${ddgst:-false} 00:08:53.080 }, 00:08:53.080 "method": "bdev_nvme_attach_controller" 00:08:53.080 } 00:08:53.080 EOF 00:08:53.080 )") 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:53.080 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.080 "params": { 00:08:53.080 "name": "Nvme1", 00:08:53.080 "trtype": "tcp", 00:08:53.080 "traddr": "10.0.0.2", 00:08:53.080 "adrfam": "ipv4", 00:08:53.080 "trsvcid": "4420", 00:08:53.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.080 "hdgst": false, 00:08:53.080 "ddgst": false 00:08:53.080 }, 00:08:53.080 "method": "bdev_nvme_attach_controller" 00:08:53.080 }' 00:08:53.080 [2024-11-06 13:50:39.219822] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:08:53.080 [2024-11-06 13:50:39.219889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251785 ] 00:08:53.080 [2024-11-06 13:50:39.313349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.341 [2024-11-06 13:50:39.366337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.603 Running I/O for 10 seconds... 
00:08:55.489 6452.00 IOPS, 50.41 MiB/s
[2024-11-06T12:50:42.710Z] 7900.00 IOPS, 61.72 MiB/s
[2024-11-06T12:50:44.094Z] 8509.00 IOPS, 66.48 MiB/s
[2024-11-06T12:50:45.036Z] 8816.00 IOPS, 68.88 MiB/s
[2024-11-06T12:50:45.976Z] 8998.80 IOPS, 70.30 MiB/s
[2024-11-06T12:50:46.916Z] 9115.50 IOPS, 71.21 MiB/s
[2024-11-06T12:50:47.857Z] 9202.00 IOPS, 71.89 MiB/s
[2024-11-06T12:50:48.847Z] 9267.88 IOPS, 72.41 MiB/s
[2024-11-06T12:50:49.840Z] 9318.78 IOPS, 72.80 MiB/s
[2024-11-06T12:50:49.840Z] 9359.20 IOPS, 73.12 MiB/s
00:09:03.560 Latency(us)
00:09:03.560 [2024-11-06T12:50:49.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:03.560 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:03.560 Verification LBA range: start 0x0 length 0x1000
00:09:03.560 Nvme1n1 : 10.05 9322.36 72.83 0.00 0.00 13638.94 2635.09 45001.39
00:09:03.560 [2024-11-06T12:50:49.840Z] ===================================================================================================================
00:09:03.560 [2024-11-06T12:50:49.840Z] Total : 9322.36 72.83 0.00 0.00 13638.94 2635.09 45001.39
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2253910
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:03.819 13:50:49
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.819 { 00:09:03.819 "params": { 00:09:03.819 "name": "Nvme$subsystem", 00:09:03.819 "trtype": "$TEST_TRANSPORT", 00:09:03.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.819 "adrfam": "ipv4", 00:09:03.819 "trsvcid": "$NVMF_PORT", 00:09:03.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.819 "hdgst": ${hdgst:-false}, 00:09:03.819 "ddgst": ${ddgst:-false} 00:09:03.819 }, 00:09:03.819 "method": "bdev_nvme_attach_controller" 00:09:03.819 } 00:09:03.819 EOF 00:09:03.819 )") 00:09:03.819 [2024-11-06 13:50:49.881786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.819 [2024-11-06 13:50:49.881815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.819 13:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.819 "params": { 00:09:03.819 "name": "Nvme1", 00:09:03.819 "trtype": "tcp", 00:09:03.819 "traddr": "10.0.0.2", 00:09:03.819 "adrfam": "ipv4", 00:09:03.819 "trsvcid": "4420", 00:09:03.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.819 "hdgst": false, 00:09:03.819 "ddgst": false 00:09:03.819 }, 00:09:03.819 "method": "bdev_nvme_attach_controller" 00:09:03.819 }' 00:09:03.819 [2024-11-06 13:50:49.893787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.819 [2024-11-06 13:50:49.893795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.819 [2024-11-06 13:50:49.905812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.819 [2024-11-06 13:50:49.905823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.819 [2024-11-06 13:50:49.917843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.917850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.926245] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:09:03.820 [2024-11-06 13:50:49.926302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253910 ] 00:09:03.820 [2024-11-06 13:50:49.929873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.929882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.941905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.941912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.953935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.953942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.965968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.965974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.977998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.978006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:49.990027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:49.990034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.002063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.002072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:03.820 [2024-11-06 13:50:50.009139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.820 [2024-11-06 13:50:50.014088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.014098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.026120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.026130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.038150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.038159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.038974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.820 [2024-11-06 13:50:50.050185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.050195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.062216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.062229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.074243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.074254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.820 [2024-11-06 13:50:50.086274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.820 [2024-11-06 13:50:50.086283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.098304] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.098311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.110344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.110359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.122373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.122385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.134405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.134417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.146435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.146445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.158467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.158475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.170496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.170502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.182528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.182534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.194560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.194569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.206590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.206596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.218619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.218626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.230651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.230659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.242681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.242690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.254713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.254719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.266748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.266755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.278779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.278787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.291469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 
[2024-11-06 13:50:50.291483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 Running I/O for 5 seconds... 00:09:04.081 [2024-11-06 13:50:50.302843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.302854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.318003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.318018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.331287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.331305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.344808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.344822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.081 [2024-11-06 13:50:50.357408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.081 [2024-11-06 13:50:50.357423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.371053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.371068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.384452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.384468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.397966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 
13:50:50.397981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.411149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.411164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.424036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.424051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.436758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.436772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.449265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.449280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.461792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.461807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.474417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.474431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.487816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.487830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.500602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.500616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.513406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.513420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.526848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.526863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.540306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.540321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.553899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.553913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.566931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.566945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.579588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.579607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.592173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.592188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 [2024-11-06 13:50:50.604585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.604600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.342 
[2024-11-06 13:50:50.618184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.342 [2024-11-06 13:50:50.618199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.631701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.631715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.645213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.645227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.658261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.658276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.671464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.671479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.685117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.685131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.698659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.698672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.711916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.711930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.724461] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.724476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.736792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.736807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.750578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.750593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.763619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.763634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.777203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.777217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.790425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.790439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.804030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.804045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.817353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.817368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.830396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.830414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.843697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.843711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.857139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.857154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.604 [2024-11-06 13:50:50.869598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.604 [2024-11-06 13:50:50.869613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.883456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.883471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.896149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.896164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.908521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.908535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.921772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.921787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.934596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 
[2024-11-06 13:50:50.934611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.947762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.947777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.960854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.960869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.974126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.974141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:50.986991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:50.987005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.000194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.000208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.013447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.013462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.026740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.026760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.039960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.039975] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.053371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.053386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.066615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.865 [2024-11-06 13:50:51.066629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.865 [2024-11-06 13:50:51.079288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.866 [2024-11-06 13:50:51.079303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.866 [2024-11-06 13:50:51.091979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.866 [2024-11-06 13:50:51.091994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.866 [2024-11-06 13:50:51.105092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.866 [2024-11-06 13:50:51.105106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.866 [2024-11-06 13:50:51.118484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.866 [2024-11-06 13:50:51.118498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.866 [2024-11-06 13:50:51.131130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.866 [2024-11-06 13:50:51.131145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.144161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.144175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:05.127 [2024-11-06 13:50:51.158025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.158040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.170872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.170887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.183157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.183172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.196736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.196755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.210256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.210270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.222799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.222814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.236212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.236226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.248989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.249003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127 [2024-11-06 13:50:51.261340] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.127 [2024-11-06 13:50:51.261355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.127
[the same error pair (subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace") repeats at roughly 13 ms intervals from 13:50:51.274 through 13:50:53.391; repeats elided, periodic throughput samples retained below]
19062.00 IOPS, 148.92 MiB/s [2024-11-06T12:50:51.407Z]
19174.00 IOPS, 149.80 MiB/s [2024-11-06T12:50:52.450Z]
19213.33 IOPS, 150.10 MiB/s [2024-11-06T12:50:53.493Z]
[2024-11-06 13:50:53.391061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.404397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.404411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.417975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.417989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.430526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.430540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.443013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.443027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.456455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.456469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.469016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.469030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.213 [2024-11-06 13:50:53.481599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.213 [2024-11-06 13:50:53.481614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.493972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.493986] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.506459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.506474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.519121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.519136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.531773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.531792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.544573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.544587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.557535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.557549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.570970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.570984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.584727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.584741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.596749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.596764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:07.474 [2024-11-06 13:50:53.609829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.609843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.623051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.623065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.635488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.635503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.649235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.649250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.662326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.662341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.675679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.675694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.688998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.689012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.702597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.702612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.715575] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.715589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.729228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.729242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.474 [2024-11-06 13:50:53.741898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.474 [2024-11-06 13:50:53.741912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.754570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.754584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.767201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.767216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.779755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.779773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.792616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.792630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.805872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.805886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.818862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.818876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.832397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.832411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.845129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.845144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.857583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.857597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.871138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.871152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.884005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.884019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.897558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.897572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.736 [2024-11-06 13:50:53.911033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.736 [2024-11-06 13:50:53.911048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.924856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 
[2024-11-06 13:50:53.924871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.938410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:53.938426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.951873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:53.951887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.965240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:53.965254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.978127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:53.978142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:53.991707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:53.991721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.737 [2024-11-06 13:50:54.005255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.737 [2024-11-06 13:50:54.005269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.018682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.018697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.031988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.032002] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.045416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.045430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.058406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.058421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.071613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.071627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.084768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.084782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.098347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.098361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.111142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.111156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.999 [2024-11-06 13:50:54.123269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.999 [2024-11-06 13:50:54.123283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.136286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.136300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:08.000 [2024-11-06 13:50:54.149504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.149518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.162716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.162731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.176010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.176025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.188691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.188706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.202591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.202605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.215943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.215957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.228726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.228741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.242071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.242085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.255547] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.255562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.000 [2024-11-06 13:50:54.269085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.000 [2024-11-06 13:50:54.269099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.281829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.281844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.294386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.294400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.307420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.307435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 19239.25 IOPS, 150.31 MiB/s [2024-11-06T12:50:54.540Z] [2024-11-06 13:50:54.321085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.321099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.334516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.334530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.346968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.346983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.360167] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.360181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.373079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.260 [2024-11-06 13:50:54.373094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.260 [2024-11-06 13:50:54.386457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.386472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.398803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.398818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.412398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.412413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.425413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.425429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.438275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.438290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.451126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.451140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.464590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.464604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.478037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.478052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.491330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.491344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.505125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.505139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.518004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.518018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.261 [2024-11-06 13:50:54.531035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.261 [2024-11-06 13:50:54.531050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.544146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.544161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.557222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.557236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.570072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 
[2024-11-06 13:50:54.570086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.583755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.583769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.597104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.597118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.609229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.609244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.622412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.622426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.636046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.636060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.649057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.649071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.662683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.662698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.675117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.675132] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.688903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.688918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.701532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.701546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.714658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.714672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.727816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.727830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.741100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.741114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.754930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.754945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.768199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.768222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.521 [2024-11-06 13:50:54.781813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.781828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:08.521 [2024-11-06 13:50:54.794899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.521 [2024-11-06 13:50:54.794913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.041 19244.20 IOPS, 150.35 MiB/s [2024-11-06T12:50:55.321Z] 00:09:09.301 00:09:09.301 Latency(us) 00:09:09.301 [2024-11-06T12:50:55.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.301 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:09.301 Nvme1n1 : 5.01 19245.79 150.36 0.00 0.00 6644.45 3003.73 19879.25 00:09:09.301 [2024-11-06T12:50:55.581Z] =================================================================================================================== 00:09:09.301 [2024-11-06T12:50:55.581Z] Total : 19245.79 150.36 0.00 0.00 6644.45 3003.73 19879.25 
00:09:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2253910) - No such process 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2253910 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.301 delay0 00:09:09.301 13:50:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.301 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:09.302 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.302 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.302 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.302 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:09.561 [2024-11-06 13:50:55.596188] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:17.692 Initializing NVMe Controllers 00:09:17.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:17.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:17.692 Initialization complete. Launching workers. 
00:09:17.692 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 32799 00:09:17.692 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32906, failed to submit 132 00:09:17.692 success 32823, unsuccessful 83, failed 0 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.692 rmmod nvme_tcp 00:09:17.692 rmmod nvme_fabrics 00:09:17.692 rmmod nvme_keyring 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2251538 ']' 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2251538 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2251538 ']' 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2251538 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@957 -- # uname 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2251538 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2251538' 00:09:17.692 killing process with pid 2251538 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2251538 00:09:17.692 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2251538 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.692 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.693 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:17.693 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.693 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.076 00:09:19.076 real 0m34.839s 00:09:19.076 user 0m45.838s 00:09:19.076 sys 0m11.959s 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 ************************************ 00:09:19.076 END TEST nvmf_zcopy 00:09:19.076 ************************************ 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 ************************************ 00:09:19.076 START TEST nvmf_nmic 00:09:19.076 ************************************ 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:19.076 * Looking for test storage... 
00:09:19.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:19.076 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.337 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.338 13:51:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:19.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.338 --rc genhtml_branch_coverage=1 00:09:19.338 --rc genhtml_function_coverage=1 00:09:19.338 --rc genhtml_legend=1 00:09:19.338 --rc geninfo_all_blocks=1 00:09:19.338 --rc geninfo_unexecuted_blocks=1 
00:09:19.338 00:09:19.338 ' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:19.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.338 --rc genhtml_branch_coverage=1 00:09:19.338 --rc genhtml_function_coverage=1 00:09:19.338 --rc genhtml_legend=1 00:09:19.338 --rc geninfo_all_blocks=1 00:09:19.338 --rc geninfo_unexecuted_blocks=1 00:09:19.338 00:09:19.338 ' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:19.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.338 --rc genhtml_branch_coverage=1 00:09:19.338 --rc genhtml_function_coverage=1 00:09:19.338 --rc genhtml_legend=1 00:09:19.338 --rc geninfo_all_blocks=1 00:09:19.338 --rc geninfo_unexecuted_blocks=1 00:09:19.338 00:09:19.338 ' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:19.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.338 --rc genhtml_branch_coverage=1 00:09:19.338 --rc genhtml_function_coverage=1 00:09:19.338 --rc genhtml_legend=1 00:09:19.338 --rc geninfo_all_blocks=1 00:09:19.338 --rc geninfo_unexecuted_blocks=1 00:09:19.338 00:09:19.338 ' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.338 13:51:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.338 
13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.338 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.475 13:51:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:27.475 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:27.475 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.475 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:27.476 Found net devices under 0000:31:00.0: cvl_0_0 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:27.476 Found net devices under 0000:31:00.1: cvl_0_1 00:09:27.476 
13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.476 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:09:27.476 00:09:27.476 --- 10.0.0.2 ping statistics --- 00:09:27.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.476 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:27.476 00:09:27.476 --- 10.0.0.1 ping statistics --- 00:09:27.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.476 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2261208 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2261208 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2261208 ']' 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:27.476 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.476 [2024-11-06 13:51:13.219884] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:09:27.476 [2024-11-06 13:51:13.219951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.476 [2024-11-06 13:51:13.319782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.476 [2024-11-06 13:51:13.374120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.476 [2024-11-06 13:51:13.374166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:27.476 [2024-11-06 13:51:13.374176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.476 [2024-11-06 13:51:13.374187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.476 [2024-11-06 13:51:13.374193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.476 [2024-11-06 13:51:13.376266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.476 [2024-11-06 13:51:13.376425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.476 [2024-11-06 13:51:13.376563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.476 [2024-11-06 13:51:13.376565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 [2024-11-06 13:51:14.074076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.048 
13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 Malloc0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 [2024-11-06 13:51:14.151710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:28.048 test case1: single bdev can't be used in multiple subsystems 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 [2024-11-06 13:51:14.187560] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:28.048 [2024-11-06 
13:51:14.187587] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:28.048 [2024-11-06 13:51:14.187596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.048 request: 00:09:28.048 { 00:09:28.048 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:28.048 "namespace": { 00:09:28.048 "bdev_name": "Malloc0", 00:09:28.048 "no_auto_visible": false 00:09:28.048 }, 00:09:28.048 "method": "nvmf_subsystem_add_ns", 00:09:28.048 "req_id": 1 00:09:28.048 } 00:09:28.048 Got JSON-RPC error response 00:09:28.048 response: 00:09:28.048 { 00:09:28.048 "code": -32602, 00:09:28.048 "message": "Invalid parameters" 00:09:28.048 } 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:28.048 Adding namespace failed - expected result. 
00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:28.048 test case2: host connect to nvmf target in multiple paths 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 [2024-11-06 13:51:14.199770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.048 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.432 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:31.344 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.344 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:31.344 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.344 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:31.344 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:33.273 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.273 [global] 00:09:33.273 thread=1 00:09:33.273 invalidate=1 00:09:33.273 rw=write 00:09:33.273 time_based=1 00:09:33.273 runtime=1 00:09:33.273 ioengine=libaio 00:09:33.273 direct=1 00:09:33.273 bs=4096 00:09:33.273 iodepth=1 00:09:33.273 norandommap=0 00:09:33.273 numjobs=1 00:09:33.273 00:09:33.273 verify_dump=1 00:09:33.273 verify_backlog=512 00:09:33.273 verify_state_save=0 00:09:33.273 do_verify=1 00:09:33.273 verify=crc32c-intel 00:09:33.273 [job0] 00:09:33.273 filename=/dev/nvme0n1 00:09:33.273 Could not set queue depth (nvme0n1) 00:09:33.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.533 fio-3.35 00:09:33.533 Starting 1 thread 00:09:34.473 00:09:34.473 job0: (groupid=0, jobs=1): err= 0: pid=2262752: Wed Nov 6 13:51:20 2024 00:09:34.473 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1008msec) 00:09:34.473 slat (nsec): min=26782, max=29332, avg=27390.17, stdev=610.55 00:09:34.473 clat (usec): min=787, max=42992, avg=37384.63, stdev=13290.69 00:09:34.473 lat (usec): min=815, max=43019, 
avg=37412.02, stdev=13290.25 00:09:34.473 clat percentiles (usec): 00:09:34.473 | 1.00th=[ 791], 5.00th=[ 791], 10.00th=[ 963], 20.00th=[41157], 00:09:34.473 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:34.473 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:09:34.473 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:34.473 | 99.99th=[43254] 00:09:34.473 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:34.473 slat (usec): min=9, max=28053, avg=85.46, stdev=1238.50 00:09:34.473 clat (usec): min=269, max=1152, avg=561.48, stdev=105.21 00:09:34.473 lat (usec): min=280, max=28607, avg=646.95, stdev=1242.96 00:09:34.473 clat percentiles (usec): 00:09:34.473 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[ 420], 20.00th=[ 465], 00:09:34.473 | 30.00th=[ 502], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:09:34.473 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 717], 00:09:34.473 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 1156], 99.95th=[ 1156], 00:09:34.473 | 99.99th=[ 1156] 00:09:34.473 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.473 lat (usec) : 500=28.30%, 750=66.79%, 1000=1.70% 00:09:34.473 lat (msec) : 2=0.19%, 50=3.02% 00:09:34.473 cpu : usr=1.09%, sys=1.89%, ctx=533, majf=0, minf=1 00:09:34.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.473 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.473 00:09:34.473 Run status group 0 (all jobs): 00:09:34.473 READ: bw=71.4KiB/s (73.1kB/s), 71.4KiB/s-71.4KiB/s (73.1kB/s-73.1kB/s), 
io=72.0KiB (73.7kB), run=1008-1008msec 00:09:34.473 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:09:34.473 00:09:34.473 Disk stats (read/write): 00:09:34.473 nvme0n1: ios=40/512, merge=0/0, ticks=1517/230, in_queue=1747, util=98.90% 00:09:34.473 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.733 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.733 rmmod nvme_tcp 00:09:34.733 rmmod nvme_fabrics 00:09:34.993 rmmod nvme_keyring 00:09:34.993 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.993 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2261208 ']' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2261208 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2261208 ']' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2261208 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2261208 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2261208' 00:09:34.994 killing process with pid 2261208 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2261208 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2261208 00:09:34.994 13:51:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.994 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.535 00:09:37.535 real 0m18.130s 00:09:37.535 user 0m45.441s 00:09:37.535 sys 0m6.762s 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 ************************************ 00:09:37.535 END TEST nvmf_nmic 00:09:37.535 ************************************ 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 ************************************ 00:09:37.535 START TEST nvmf_fio_target 00:09:37.535 ************************************ 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:37.535 * Looking for test storage... 00:09:37.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.535 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:37.536 13:51:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.536 --rc genhtml_branch_coverage=1 00:09:37.536 --rc genhtml_function_coverage=1 00:09:37.536 --rc genhtml_legend=1 00:09:37.536 --rc geninfo_all_blocks=1 00:09:37.536 --rc geninfo_unexecuted_blocks=1 00:09:37.536 00:09:37.536 ' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.536 --rc genhtml_branch_coverage=1 00:09:37.536 --rc genhtml_function_coverage=1 00:09:37.536 --rc genhtml_legend=1 00:09:37.536 --rc geninfo_all_blocks=1 00:09:37.536 --rc geninfo_unexecuted_blocks=1 00:09:37.536 00:09:37.536 ' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.536 --rc genhtml_branch_coverage=1 00:09:37.536 --rc genhtml_function_coverage=1 00:09:37.536 --rc genhtml_legend=1 00:09:37.536 --rc geninfo_all_blocks=1 00:09:37.536 --rc geninfo_unexecuted_blocks=1 00:09:37.536 00:09:37.536 ' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.536 --rc genhtml_branch_coverage=1 00:09:37.536 --rc genhtml_function_coverage=1 00:09:37.536 --rc genhtml_legend=1 00:09:37.536 --rc geninfo_all_blocks=1 00:09:37.536 --rc geninfo_unexecuted_blocks=1 00:09:37.536 00:09:37.536 ' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.536 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.736 13:51:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:45.736 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:45.736 13:51:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:45.736 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:45.736 Found net devices under 0000:31:00.0: cvl_0_0 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:45.736 Found net devices under 0000:31:00.1: cvl_0_1 
00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.736 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.737 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:09:45.737 00:09:45.737 --- 10.0.0.2 ping statistics --- 00:09:45.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.737 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:45.737 00:09:45.737 --- 10.0.0.1 ping statistics --- 00:09:45.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.737 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
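The `nvmf_tcp_init` steps traced above move the target-side port of the NIC pair into a dedicated network namespace, assign the test addresses, open the NVMe/TCP port in iptables, and ping in both directions. Consolidated as one sketch (root required; interface names, namespace name, and addresses are the ones this particular log uses, so they are environment-specific, not general defaults):

```shell
# Consolidated sketch of the namespace setup performed in the log above
# (must run as root; cvl_0_0/cvl_0_1 are this host's ice NIC ports).
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open port 4420; the comment tag lets teardown strip SPDK rules later
# by filtering iptables-save output for SPDK_NVMF (see the iptr step above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify connectivity in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Once this succeeds, the target app is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.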
00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2267287 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2267287 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2267287 ']' 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.737 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.737 [2024-11-06 13:51:31.309432] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:09:45.737 [2024-11-06 13:51:31.309498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.737 [2024-11-06 13:51:31.409839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.737 [2024-11-06 13:51:31.464614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.737 [2024-11-06 13:51:31.464664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.737 [2024-11-06 13:51:31.464674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.737 [2024-11-06 13:51:31.464681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.737 [2024-11-06 13:51:31.464687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.737 [2024-11-06 13:51:31.466769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.737 [2024-11-06 13:51:31.466906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.737 [2024-11-06 13:51:31.467182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.737 [2024-11-06 13:51:31.467186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.999 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.260 [2024-11-06 13:51:32.342423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.260 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.521 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:46.521 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.783 13:51:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:46.783 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.783 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:46.783 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.044 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:47.045 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:47.306 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.566 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:47.566 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.827 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:47.827 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.827 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:47.827 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:48.088 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.350 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.350 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.611 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.611 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.611 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.872 [2024-11-06 13:51:34.970988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.872 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:49.133 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:49.133 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
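After `nvme connect`, the test's `waitforserial` helper polls until the expected number of block devices with the subsystem serial appears, using `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` as traced below. A minimal sketch of that count in Python; the sample lsblk text is illustrative, modeled on the four namespaces (Malloc0, Malloc1, raid0, concat0) added above:

```python
def count_devices(lsblk_output: str, serial: str) -> int:
    """Count block-device rows whose SERIAL column matches,
    mirroring 'lsblk -l -o NAME,SERIAL | grep -c <serial>'."""
    return sum(1 for line in lsblk_output.splitlines() if serial in line)

# Hypothetical lsblk output: one row per namespace of cnode1.
sample = """NAME    SERIAL
nvme0n1 SPDKISFASTANDAWESOME
nvme0n2 SPDKISFASTANDAWESOME
nvme0n3 SPDKISFASTANDAWESOME
nvme0n4 SPDKISFASTANDAWESOME"""

print(count_devices(sample, "SPDKISFASTANDAWESOME"))  # 4
```

The loop in the trace below retries this check (up to 16 times, sleeping between attempts) until the count equals the expected device counter of 4.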
00:09:51.047 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:51.047 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:51.048 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.048 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:51.048 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:51.048 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:52.960 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:52.960 [global] 00:09:52.960 thread=1 00:09:52.960 invalidate=1 00:09:52.960 rw=write 00:09:52.960 time_based=1 00:09:52.960 runtime=1 00:09:52.960 ioengine=libaio 00:09:52.960 direct=1 00:09:52.960 bs=4096 00:09:52.960 iodepth=1 00:09:52.960 norandommap=0 00:09:52.960 numjobs=1 00:09:52.960 00:09:52.960 
verify_dump=1 00:09:52.960 verify_backlog=512 00:09:52.960 verify_state_save=0 00:09:52.960 do_verify=1 00:09:52.960 verify=crc32c-intel 00:09:52.960 [job0] 00:09:52.960 filename=/dev/nvme0n1 00:09:52.960 [job1] 00:09:52.960 filename=/dev/nvme0n2 00:09:52.960 [job2] 00:09:52.960 filename=/dev/nvme0n3 00:09:52.960 [job3] 00:09:52.960 filename=/dev/nvme0n4 00:09:52.960 Could not set queue depth (nvme0n1) 00:09:52.960 Could not set queue depth (nvme0n2) 00:09:52.960 Could not set queue depth (nvme0n3) 00:09:52.960 Could not set queue depth (nvme0n4) 00:09:53.220 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.220 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.220 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.220 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.220 fio-3.35 00:09:53.220 Starting 4 threads 00:09:54.602 00:09:54.602 job0: (groupid=0, jobs=1): err= 0: pid=2269058: Wed Nov 6 13:51:40 2024 00:09:54.602 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 00:09:54.602 slat (nsec): min=18236, max=32330, avg=26080.95, stdev=2371.57 00:09:54.602 clat (usec): min=40912, max=42032, avg=41422.23, stdev=494.41 00:09:54.602 lat (usec): min=40938, max=42050, avg=41448.31, stdev=494.15 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:54.602 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:54.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:54.602 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.602 | 99.99th=[42206] 00:09:54.602 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:54.602 slat (nsec): min=9940, 
max=61183, avg=31039.81, stdev=9349.43 00:09:54.602 clat (usec): min=191, max=707, avg=409.03, stdev=95.43 00:09:54.602 lat (usec): min=225, max=742, avg=440.07, stdev=97.14 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[ 227], 5.00th=[ 269], 10.00th=[ 293], 20.00th=[ 326], 00:09:54.602 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 392], 60.00th=[ 429], 00:09:54.602 | 70.00th=[ 465], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 562], 00:09:54.602 | 99.00th=[ 619], 99.50th=[ 676], 99.90th=[ 709], 99.95th=[ 709], 00:09:54.602 | 99.99th=[ 709] 00:09:54.602 bw ( KiB/s): min= 4096, max= 4096, per=44.11%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.602 lat (usec) : 250=2.82%, 500=74.39%, 750=19.21% 00:09:54.602 lat (msec) : 50=3.58% 00:09:54.602 cpu : usr=0.59%, sys=1.66%, ctx=532, majf=0, minf=1 00:09:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.602 job1: (groupid=0, jobs=1): err= 0: pid=2269059: Wed Nov 6 13:51:40 2024 00:09:54.602 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:54.602 slat (nsec): min=7092, max=63766, avg=25973.96, stdev=3697.33 00:09:54.602 clat (usec): min=341, max=1581, avg=914.08, stdev=128.86 00:09:54.602 lat (usec): min=367, max=1606, avg=940.06, stdev=128.78 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[ 545], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 816], 00:09:54.602 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 947], 60.00th=[ 971], 00:09:54.602 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:09:54.602 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 
1582], 99.95th=[ 1582], 00:09:54.602 | 99.99th=[ 1582] 00:09:54.602 write: IOPS=838, BW=3353KiB/s (3433kB/s)(3356KiB/1001msec); 0 zone resets 00:09:54.602 slat (nsec): min=9711, max=64206, avg=29674.35, stdev=10134.62 00:09:54.602 clat (usec): min=212, max=1107, avg=570.13, stdev=132.81 00:09:54.602 lat (usec): min=224, max=1142, avg=599.81, stdev=136.90 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[ 227], 5.00th=[ 330], 10.00th=[ 383], 20.00th=[ 461], 00:09:54.602 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 611], 00:09:54.602 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:09:54.602 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:54.602 | 99.99th=[ 1106] 00:09:54.602 bw ( KiB/s): min= 4096, max= 4096, per=44.11%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.602 lat (usec) : 250=1.11%, 500=16.88%, 750=44.41%, 1000=27.83% 00:09:54.602 lat (msec) : 2=9.77% 00:09:54.602 cpu : usr=1.90%, sys=4.00%, ctx=1354, majf=0, minf=1 00:09:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 issued rwts: total=512,839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.602 job2: (groupid=0, jobs=1): err= 0: pid=2269061: Wed Nov 6 13:51:40 2024 00:09:54.602 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:09:54.602 slat (nsec): min=25993, max=26478, avg=26174.65, stdev=149.03 00:09:54.602 clat (usec): min=1244, max=43021, avg=39686.81, stdev=9911.85 00:09:54.602 lat (usec): min=1270, max=43047, avg=39712.99, stdev=9911.85 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[41681], 20.00th=[41681], 
00:09:54.602 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:54.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:09:54.602 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:54.602 | 99.99th=[43254] 00:09:54.602 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:54.602 slat (nsec): min=10149, max=53350, avg=31663.23, stdev=8590.23 00:09:54.602 clat (usec): min=269, max=975, avg=595.33, stdev=126.38 00:09:54.602 lat (usec): min=280, max=1009, avg=626.99, stdev=129.41 00:09:54.602 clat percentiles (usec): 00:09:54.602 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 424], 20.00th=[ 482], 00:09:54.602 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 635], 00:09:54.602 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:09:54.602 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 979], 99.95th=[ 979], 00:09:54.602 | 99.99th=[ 979] 00:09:54.602 bw ( KiB/s): min= 4096, max= 4096, per=44.11%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.602 lat (usec) : 500=21.93%, 750=63.14%, 1000=11.72% 00:09:54.602 lat (msec) : 2=0.19%, 50=3.02% 00:09:54.602 cpu : usr=1.29%, sys=1.09%, ctx=530, majf=0, minf=1 00:09:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.602 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.602 job3: (groupid=0, jobs=1): err= 0: pid=2269067: Wed Nov 6 13:51:40 2024 00:09:54.602 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:09:54.602 slat (nsec): min=25992, max=26599, avg=26247.26, stdev=181.04 00:09:54.602 clat (usec): min=40939, max=41987, 
avg=41537.79, stdev=492.93 00:09:54.603 lat (usec): min=40965, max=42013, avg=41564.04, stdev=492.93 00:09:54.603 clat percentiles (usec): 00:09:54.603 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:54.603 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:54.603 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:54.603 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.603 | 99.99th=[42206] 00:09:54.603 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:54.603 slat (nsec): min=10008, max=64750, avg=29166.78, stdev=10876.70 00:09:54.603 clat (usec): min=126, max=883, avg=388.21, stdev=138.57 00:09:54.603 lat (usec): min=136, max=918, avg=417.38, stdev=140.61 00:09:54.603 clat percentiles (usec): 00:09:54.603 | 1.00th=[ 131], 5.00th=[ 167], 10.00th=[ 198], 20.00th=[ 273], 00:09:54.603 | 30.00th=[ 310], 40.00th=[ 334], 50.00th=[ 371], 60.00th=[ 424], 00:09:54.603 | 70.00th=[ 457], 80.00th=[ 523], 90.00th=[ 578], 95.00th=[ 611], 00:09:54.603 | 99.00th=[ 701], 99.50th=[ 775], 99.90th=[ 881], 99.95th=[ 881], 00:09:54.603 | 99.99th=[ 881] 00:09:54.603 bw ( KiB/s): min= 4096, max= 4096, per=44.11%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.603 lat (usec) : 250=14.31%, 500=59.89%, 750=21.66%, 1000=0.56% 00:09:54.603 lat (msec) : 50=3.58% 00:09:54.603 cpu : usr=0.69%, sys=1.49%, ctx=535, majf=0, minf=1 00:09:54.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.603 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.603 00:09:54.603 Run status group 0 (all jobs): 
00:09:54.603 READ: bw=2217KiB/s (2270kB/s), 67.6KiB/s-2046KiB/s (69.2kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1023msec 00:09:54.603 WRITE: bw=9286KiB/s (9509kB/s), 2002KiB/s-3353KiB/s (2050kB/s-3433kB/s), io=9500KiB (9728kB), run=1001-1023msec 00:09:54.603 00:09:54.603 Disk stats (read/write): 00:09:54.603 nvme0n1: ios=38/512, merge=0/0, ticks=1504/197, in_queue=1701, util=99.00% 00:09:54.603 nvme0n2: ios=521/512, merge=0/0, ticks=1024/292, in_queue=1316, util=99.58% 00:09:54.603 nvme0n3: ios=32/512, merge=0/0, ticks=1308/284, in_queue=1592, util=99.56% 00:09:54.603 nvme0n4: ios=70/512, merge=0/0, ticks=872/185, in_queue=1057, util=99.77% 00:09:54.603 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:54.603 [global] 00:09:54.603 thread=1 00:09:54.603 invalidate=1 00:09:54.603 rw=randwrite 00:09:54.603 time_based=1 00:09:54.603 runtime=1 00:09:54.603 ioengine=libaio 00:09:54.603 direct=1 00:09:54.603 bs=4096 00:09:54.603 iodepth=1 00:09:54.603 norandommap=0 00:09:54.603 numjobs=1 00:09:54.603 00:09:54.603 verify_dump=1 00:09:54.603 verify_backlog=512 00:09:54.603 verify_state_save=0 00:09:54.603 do_verify=1 00:09:54.603 verify=crc32c-intel 00:09:54.603 [job0] 00:09:54.603 filename=/dev/nvme0n1 00:09:54.603 [job1] 00:09:54.603 filename=/dev/nvme0n2 00:09:54.603 [job2] 00:09:54.603 filename=/dev/nvme0n3 00:09:54.603 [job3] 00:09:54.603 filename=/dev/nvme0n4 00:09:54.603 Could not set queue depth (nvme0n1) 00:09:54.603 Could not set queue depth (nvme0n2) 00:09:54.603 Could not set queue depth (nvme0n3) 00:09:54.603 Could not set queue depth (nvme0n4) 00:09:54.863 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.863 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.863 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.863 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.863 fio-3.35 00:09:54.863 Starting 4 threads 00:09:56.245 00:09:56.245 job0: (groupid=0, jobs=1): err= 0: pid=2269585: Wed Nov 6 13:51:42 2024 00:09:56.245 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:09:56.245 slat (nsec): min=25875, max=26289, avg=26061.65, stdev=146.67 00:09:56.245 clat (usec): min=1264, max=42981, avg=39606.74, stdev=9889.81 00:09:56.245 lat (usec): min=1290, max=43007, avg=39632.80, stdev=9889.85 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41157], 20.00th=[41681], 00:09:56.245 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:56.245 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:56.245 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:56.245 | 99.99th=[42730] 00:09:56.245 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:56.245 slat (nsec): min=9689, max=71183, avg=29677.30, stdev=9624.18 00:09:56.245 clat (usec): min=219, max=987, avg=601.33, stdev=119.77 00:09:56.245 lat (usec): min=229, max=1036, avg=631.00, stdev=123.86 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 326], 5.00th=[ 392], 10.00th=[ 445], 20.00th=[ 494], 00:09:56.245 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:09:56.245 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 783], 00:09:56.245 | 99.00th=[ 865], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:09:56.245 | 99.99th=[ 988] 00:09:56.245 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.245 lat (usec) : 250=0.19%, 500=20.98%, 750=67.67%, 1000=7.94% 
00:09:56.245 lat (msec) : 2=0.19%, 50=3.02% 00:09:56.245 cpu : usr=1.00%, sys=1.30%, ctx=532, majf=0, minf=1 00:09:56.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.245 job1: (groupid=0, jobs=1): err= 0: pid=2269586: Wed Nov 6 13:51:42 2024 00:09:56.245 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:09:56.245 slat (nsec): min=9016, max=27841, avg=23489.29, stdev=4810.37 00:09:56.245 clat (usec): min=522, max=42197, avg=37511.05, stdev=12244.59 00:09:56.245 lat (usec): min=549, max=42221, avg=37534.54, stdev=12243.29 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 523], 5.00th=[ 898], 10.00th=[40633], 20.00th=[41157], 00:09:56.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:56.245 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:56.245 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:56.245 | 99.99th=[42206] 00:09:56.245 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:56.245 slat (nsec): min=9027, max=50160, avg=24748.55, stdev=9803.40 00:09:56.245 clat (usec): min=103, max=861, avg=422.72, stdev=122.84 00:09:56.245 lat (usec): min=123, max=882, avg=447.47, stdev=122.63 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 161], 5.00th=[ 215], 10.00th=[ 281], 20.00th=[ 314], 00:09:56.245 | 30.00th=[ 343], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 461], 00:09:56.245 | 70.00th=[ 490], 80.00th=[ 529], 90.00th=[ 586], 95.00th=[ 619], 00:09:56.245 | 99.00th=[ 717], 99.50th=[ 766], 99.90th=[ 865], 99.95th=[ 865], 00:09:56.245 | 99.99th=[ 865] 00:09:56.245 bw ( KiB/s): min= 
4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.245 lat (usec) : 250=7.13%, 500=62.66%, 750=25.70%, 1000=0.94% 00:09:56.245 lat (msec) : 50=3.56% 00:09:56.245 cpu : usr=0.78%, sys=1.18%, ctx=533, majf=0, minf=1 00:09:56.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.245 job2: (groupid=0, jobs=1): err= 0: pid=2269587: Wed Nov 6 13:51:42 2024 00:09:56.245 read: IOPS=335, BW=1343KiB/s (1375kB/s)(1344KiB/1001msec) 00:09:56.245 slat (nsec): min=6937, max=47762, avg=24058.18, stdev=6029.16 00:09:56.245 clat (usec): min=373, max=42223, avg=2268.08, stdev=7316.31 00:09:56.245 lat (usec): min=399, max=42250, avg=2292.14, stdev=7316.72 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 482], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 783], 00:09:56.245 | 30.00th=[ 840], 40.00th=[ 889], 50.00th=[ 914], 60.00th=[ 947], 00:09:56.245 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1045], 95.00th=[ 1106], 00:09:56.245 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:56.245 | 99.99th=[42206] 00:09:56.245 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:56.245 slat (nsec): min=9638, max=54985, avg=29287.20, stdev=10031.22 00:09:56.245 clat (usec): min=112, max=736, avg=401.00, stdev=108.67 00:09:56.245 lat (usec): min=123, max=749, avg=430.29, stdev=111.09 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 192], 5.00th=[ 223], 10.00th=[ 273], 20.00th=[ 310], 00:09:56.245 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 404], 60.00th=[ 433], 00:09:56.245 | 
70.00th=[ 453], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 586], 00:09:56.245 | 99.00th=[ 627], 99.50th=[ 668], 99.90th=[ 734], 99.95th=[ 734], 00:09:56.245 | 99.99th=[ 734] 00:09:56.245 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.245 lat (usec) : 250=4.95%, 500=44.46%, 750=15.92%, 1000=27.59% 00:09:56.245 lat (msec) : 2=5.66%, 20=0.12%, 50=1.30% 00:09:56.245 cpu : usr=0.70%, sys=2.80%, ctx=849, majf=0, minf=1 00:09:56.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 issued rwts: total=336,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.245 job3: (groupid=0, jobs=1): err= 0: pid=2269588: Wed Nov 6 13:51:42 2024 00:09:56.245 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:56.245 slat (nsec): min=8758, max=42841, avg=27088.62, stdev=2901.45 00:09:56.245 clat (usec): min=501, max=1219, avg=962.18, stdev=66.67 00:09:56.245 lat (usec): min=528, max=1247, avg=989.27, stdev=67.33 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 750], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 922], 00:09:56.245 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:09:56.245 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1037], 00:09:56.245 | 99.00th=[ 1090], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:56.245 | 99.99th=[ 1221] 00:09:56.245 write: IOPS=782, BW=3129KiB/s (3204kB/s)(3132KiB/1001msec); 0 zone resets 00:09:56.245 slat (nsec): min=9141, max=65704, avg=29863.80, stdev=9410.98 00:09:56.245 clat (usec): min=191, max=1014, avg=587.68, stdev=112.02 00:09:56.245 lat (usec): min=200, max=1047, avg=617.54, 
stdev=116.45 00:09:56.245 clat percentiles (usec): 00:09:56.245 | 1.00th=[ 293], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 494], 00:09:56.245 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:09:56.245 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:09:56.245 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 1012], 99.95th=[ 1012], 00:09:56.245 | 99.99th=[ 1012] 00:09:56.245 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.245 lat (usec) : 250=0.31%, 500=12.12%, 750=44.48%, 1000=33.51% 00:09:56.245 lat (msec) : 2=9.58% 00:09:56.245 cpu : usr=1.90%, sys=5.80%, ctx=1295, majf=0, minf=2 00:09:56.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.245 issued rwts: total=512,783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.245 00:09:56.245 Run status group 0 (all jobs): 00:09:56.245 READ: bw=3471KiB/s (3554kB/s), 67.7KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=3544KiB (3629kB), run=1001-1021msec 00:09:56.245 WRITE: bw=9085KiB/s (9303kB/s), 2006KiB/s-3129KiB/s (2054kB/s-3204kB/s), io=9276KiB (9499kB), run=1001-1021msec 00:09:56.245 00:09:56.245 Disk stats (read/write): 00:09:56.245 nvme0n1: ios=65/512, merge=0/0, ticks=1016/287, in_queue=1303, util=97.09% 00:09:56.246 nvme0n2: ios=55/512, merge=0/0, ticks=664/213, in_queue=877, util=92.67% 00:09:56.246 nvme0n3: ios=198/512, merge=0/0, ticks=1581/190, in_queue=1771, util=97.37% 00:09:56.246 nvme0n4: ios=557/516, merge=0/0, ticks=621/232, in_queue=853, util=96.92% 00:09:56.246 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:56.246 [global] 00:09:56.246 thread=1 00:09:56.246 invalidate=1 00:09:56.246 rw=write 00:09:56.246 time_based=1 00:09:56.246 runtime=1 00:09:56.246 ioengine=libaio 00:09:56.246 direct=1 00:09:56.246 bs=4096 00:09:56.246 iodepth=128 00:09:56.246 norandommap=0 00:09:56.246 numjobs=1 00:09:56.246 00:09:56.246 verify_dump=1 00:09:56.246 verify_backlog=512 00:09:56.246 verify_state_save=0 00:09:56.246 do_verify=1 00:09:56.246 verify=crc32c-intel 00:09:56.246 [job0] 00:09:56.246 filename=/dev/nvme0n1 00:09:56.246 [job1] 00:09:56.246 filename=/dev/nvme0n2 00:09:56.246 [job2] 00:09:56.246 filename=/dev/nvme0n3 00:09:56.246 [job3] 00:09:56.246 filename=/dev/nvme0n4 00:09:56.246 Could not set queue depth (nvme0n1) 00:09:56.246 Could not set queue depth (nvme0n2) 00:09:56.246 Could not set queue depth (nvme0n3) 00:09:56.246 Could not set queue depth (nvme0n4) 00:09:56.505 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.505 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.505 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.505 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.505 fio-3.35 00:09:56.505 Starting 4 threads 00:09:57.888 00:09:57.888 job0: (groupid=0, jobs=1): err= 0: pid=2270115: Wed Nov 6 13:51:44 2024 00:09:57.888 read: IOPS=8025, BW=31.4MiB/s (32.9MB/s)(31.5MiB/1004msec) 00:09:57.888 slat (nsec): min=949, max=13422k, avg=64227.74, stdev=489076.83 00:09:57.888 clat (usec): min=1294, max=55755, avg=8043.30, stdev=4497.55 00:09:57.888 lat (usec): min=2814, max=55759, avg=8107.52, stdev=4548.08 00:09:57.888 clat percentiles (usec): 00:09:57.888 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 
5407], 20.00th=[ 5800], 00:09:57.888 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7570], 00:09:57.888 | 70.00th=[ 8094], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[14484], 00:09:57.888 | 99.00th=[25035], 99.50th=[45351], 99.90th=[55313], 99.95th=[55837], 00:09:57.888 | 99.99th=[55837] 00:09:57.888 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:09:57.888 slat (nsec): min=1550, max=10654k, avg=53568.63, stdev=389530.02 00:09:57.888 clat (usec): min=1299, max=55749, avg=7633.20, stdev=5076.78 00:09:57.888 lat (usec): min=1308, max=55751, avg=7686.77, stdev=5103.05 00:09:57.888 clat percentiles (usec): 00:09:57.888 | 1.00th=[ 2409], 5.00th=[ 3490], 10.00th=[ 4146], 20.00th=[ 5276], 00:09:57.888 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6325], 60.00th=[ 6718], 00:09:57.888 | 70.00th=[ 7177], 80.00th=[ 8455], 90.00th=[12911], 95.00th=[15401], 00:09:57.888 | 99.00th=[31589], 99.50th=[37487], 99.90th=[51119], 99.95th=[51119], 00:09:57.888 | 99.99th=[55837] 00:09:57.888 bw ( KiB/s): min=28240, max=37296, per=33.78%, avg=32768.00, stdev=6403.56, samples=2 00:09:57.888 iops : min= 7060, max= 9324, avg=8192.00, stdev=1600.89, samples=2 00:09:57.888 lat (msec) : 2=0.15%, 4=4.92%, 10=80.58%, 20=12.21%, 50=1.91% 00:09:57.888 lat (msec) : 100=0.24% 00:09:57.888 cpu : usr=6.78%, sys=7.18%, ctx=587, majf=0, minf=1 00:09:57.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:57.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.888 issued rwts: total=8058,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.888 job1: (groupid=0, jobs=1): err= 0: pid=2270116: Wed Nov 6 13:51:44 2024 00:09:57.888 read: IOPS=5280, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1002msec) 00:09:57.888 slat (nsec): min=901, max=24364k, avg=108742.11, 
stdev=804902.72 00:09:57.888 clat (usec): min=1340, max=67525, avg=13865.41, stdev=11594.39 00:09:57.888 lat (usec): min=1702, max=67533, avg=13974.15, stdev=11657.97 00:09:57.888 clat percentiles (usec): 00:09:57.888 | 1.00th=[ 2769], 5.00th=[ 5473], 10.00th=[ 6718], 20.00th=[ 7767], 00:09:57.888 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:57.888 | 70.00th=[10945], 80.00th=[19530], 90.00th=[30540], 95.00th=[36439], 00:09:57.888 | 99.00th=[64750], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:09:57.888 | 99.99th=[67634] 00:09:57.888 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:57.888 slat (nsec): min=1525, max=5837.1k, avg=69593.76, stdev=404992.25 00:09:57.888 clat (usec): min=1078, max=36789, avg=9543.04, stdev=6358.99 00:09:57.888 lat (usec): min=1087, max=36796, avg=9612.63, stdev=6402.83 00:09:57.888 clat percentiles (usec): 00:09:57.888 | 1.00th=[ 1516], 5.00th=[ 3621], 10.00th=[ 4621], 20.00th=[ 5932], 00:09:57.888 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8094], 00:09:57.888 | 70.00th=[ 9110], 80.00th=[11863], 90.00th=[16909], 95.00th=[27132], 00:09:57.888 | 99.00th=[30802], 99.50th=[31065], 99.90th=[33162], 99.95th=[33162], 00:09:57.888 | 99.99th=[36963] 00:09:57.888 bw ( KiB/s): min=16384, max=28672, per=23.22%, avg=22528.00, stdev=8688.93, samples=2 00:09:57.888 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:09:57.888 lat (msec) : 2=0.81%, 4=3.75%, 10=65.80%, 20=15.71%, 50=12.78% 00:09:57.888 lat (msec) : 100=1.15% 00:09:57.888 cpu : usr=3.70%, sys=5.00%, ctx=485, majf=0, minf=2 00:09:57.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:57.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.888 issued rwts: total=5291,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.889 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:57.889 job2: (groupid=0, jobs=1): err= 0: pid=2270117: Wed Nov 6 13:51:44 2024 00:09:57.889 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:09:57.889 slat (nsec): min=990, max=13646k, avg=98935.73, stdev=740173.77 00:09:57.889 clat (usec): min=1535, max=71991, avg=12287.35, stdev=8067.83 00:09:57.889 lat (usec): min=1554, max=71997, avg=12386.29, stdev=8161.71 00:09:57.889 clat percentiles (usec): 00:09:57.889 | 1.00th=[ 3621], 5.00th=[ 4817], 10.00th=[ 6390], 20.00th=[ 8094], 00:09:57.889 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10683], 00:09:57.889 | 70.00th=[11731], 80.00th=[17957], 90.00th=[19530], 95.00th=[23200], 00:09:57.889 | 99.00th=[44303], 99.50th=[60556], 99.90th=[71828], 99.95th=[71828], 00:09:57.889 | 99.99th=[71828] 00:09:57.889 write: IOPS=4510, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1002msec); 0 zone resets 00:09:57.889 slat (nsec): min=1671, max=40799k, avg=119303.62, stdev=856415.49 00:09:57.889 clat (usec): min=342, max=75938, avg=15813.93, stdev=15902.34 00:09:57.889 lat (usec): min=377, max=75946, avg=15933.23, stdev=16006.72 00:09:57.889 clat percentiles (usec): 00:09:57.889 | 1.00th=[ 1467], 5.00th=[ 4047], 10.00th=[ 4948], 20.00th=[ 6325], 00:09:57.889 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[10814], 00:09:57.889 | 70.00th=[15270], 80.00th=[23200], 90.00th=[34866], 95.00th=[60031], 00:09:57.889 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:09:57.889 | 99.99th=[76022] 00:09:57.889 bw ( KiB/s): min=14664, max=20480, per=18.11%, avg=17572.00, stdev=4112.53, samples=2 00:09:57.889 iops : min= 3666, max= 5120, avg=4393.00, stdev=1028.13, samples=2 00:09:57.889 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.20% 00:09:57.889 lat (msec) : 2=0.63%, 4=2.94%, 10=53.34%, 20=26.87%, 50=11.88% 00:09:57.889 lat (msec) : 100=4.07% 00:09:57.889 cpu : usr=3.10%, sys=5.19%, ctx=372, majf=0, minf=1 00:09:57.889 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:57.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.889 issued rwts: total=4096,4520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.889 job3: (groupid=0, jobs=1): err= 0: pid=2270118: Wed Nov 6 13:51:44 2024 00:09:57.889 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:57.889 slat (nsec): min=965, max=15282k, avg=89697.94, stdev=664154.84 00:09:57.889 clat (usec): min=4788, max=44666, avg=11235.67, stdev=5653.83 00:09:57.889 lat (usec): min=5051, max=44693, avg=11325.36, stdev=5714.53 00:09:57.889 clat percentiles (usec): 00:09:57.889 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7701], 00:09:57.889 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[10421], 00:09:57.889 | 70.00th=[11469], 80.00th=[13042], 90.00th=[19530], 95.00th=[22676], 00:09:57.889 | 99.00th=[31851], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:57.889 | 99.99th=[44827] 00:09:57.889 write: IOPS=5980, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1004msec); 0 zone resets 00:09:57.889 slat (nsec): min=1650, max=12447k, avg=78005.12, stdev=515590.19 00:09:57.889 clat (usec): min=548, max=40049, avg=10389.26, stdev=5721.16 00:09:57.889 lat (usec): min=3374, max=40081, avg=10467.27, stdev=5766.73 00:09:57.889 clat percentiles (usec): 00:09:57.889 | 1.00th=[ 3982], 5.00th=[ 5669], 10.00th=[ 7046], 20.00th=[ 7635], 00:09:57.889 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 9241], 00:09:57.889 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[19006], 95.00th=[26084], 00:09:57.889 | 99.00th=[33162], 99.50th=[33424], 99.90th=[34341], 99.95th=[35914], 00:09:57.889 | 99.99th=[40109] 00:09:57.889 bw ( KiB/s): min=22512, max=24496, per=24.23%, avg=23504.00, stdev=1402.90, samples=2 00:09:57.889 iops : min= 5628, 
max= 6124, avg=5876.00, stdev=350.72, samples=2 00:09:57.889 lat (usec) : 750=0.01% 00:09:57.889 lat (msec) : 4=0.53%, 10=65.46%, 20=24.30%, 50=9.69% 00:09:57.889 cpu : usr=3.99%, sys=5.18%, ctx=590, majf=0, minf=1 00:09:57.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:57.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.889 issued rwts: total=5632,6004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.889 00:09:57.889 Run status group 0 (all jobs): 00:09:57.889 READ: bw=89.8MiB/s (94.1MB/s), 16.0MiB/s-31.4MiB/s (16.7MB/s-32.9MB/s), io=90.1MiB (94.5MB), run=1002-1004msec 00:09:57.889 WRITE: bw=94.7MiB/s (99.3MB/s), 17.6MiB/s-31.9MiB/s (18.5MB/s-33.4MB/s), io=95.1MiB (99.7MB), run=1002-1004msec 00:09:57.889 00:09:57.889 Disk stats (read/write): 00:09:57.889 nvme0n1: ios=6276/6656, merge=0/0, ticks=47076/47586, in_queue=94662, util=82.46% 00:09:57.889 nvme0n2: ios=4146/4252, merge=0/0, ticks=17464/15597, in_queue=33061, util=86.44% 00:09:57.889 nvme0n3: ios=3099/3303, merge=0/0, ticks=36683/51161, in_queue=87844, util=95.91% 00:09:57.889 nvme0n4: ios=5141/5143, merge=0/0, ticks=27629/20226, in_queue=47855, util=99.77% 00:09:57.889 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:57.889 [global] 00:09:57.889 thread=1 00:09:57.889 invalidate=1 00:09:57.889 rw=randwrite 00:09:57.889 time_based=1 00:09:57.889 runtime=1 00:09:57.889 ioengine=libaio 00:09:57.889 direct=1 00:09:57.889 bs=4096 00:09:57.889 iodepth=128 00:09:57.889 norandommap=0 00:09:57.889 numjobs=1 00:09:57.889 00:09:57.889 verify_dump=1 00:09:57.889 verify_backlog=512 00:09:57.889 verify_state_save=0 00:09:57.889 do_verify=1 00:09:57.889 
verify=crc32c-intel 00:09:57.889 [job0] 00:09:57.889 filename=/dev/nvme0n1 00:09:57.889 [job1] 00:09:57.889 filename=/dev/nvme0n2 00:09:57.889 [job2] 00:09:57.889 filename=/dev/nvme0n3 00:09:57.889 [job3] 00:09:57.889 filename=/dev/nvme0n4 00:09:57.889 Could not set queue depth (nvme0n1) 00:09:57.889 Could not set queue depth (nvme0n2) 00:09:57.889 Could not set queue depth (nvme0n3) 00:09:57.889 Could not set queue depth (nvme0n4) 00:09:58.467 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.467 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.467 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.467 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.467 fio-3.35 00:09:58.467 Starting 4 threads 00:09:59.410 00:09:59.410 job0: (groupid=0, jobs=1): err= 0: pid=2270634: Wed Nov 6 13:51:45 2024 00:09:59.410 read: IOPS=7002, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1005msec) 00:09:59.410 slat (nsec): min=944, max=14621k, avg=70409.91, stdev=434182.69 00:09:59.410 clat (usec): min=1672, max=28415, avg=8862.89, stdev=2971.54 00:09:59.410 lat (usec): min=4738, max=28453, avg=8933.30, stdev=2985.28 00:09:59.410 clat percentiles (usec): 00:09:59.410 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7177], 00:09:59.410 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8586], 00:09:59.410 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[10683], 95.00th=[15139], 00:09:59.410 | 99.00th=[23987], 99.50th=[24773], 99.90th=[26870], 99.95th=[27132], 00:09:59.410 | 99.99th=[28443] 00:09:59.410 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:09:59.410 slat (nsec): min=1567, max=11554k, avg=67223.42, stdev=367418.47 00:09:59.410 clat (usec): min=3997, max=81316, 
avg=9027.63, stdev=8322.34 00:09:59.410 lat (usec): min=4004, max=81332, avg=9094.86, stdev=8370.60 00:09:59.410 clat percentiles (usec): 00:09:59.410 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 6915], 00:09:59.410 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7701], 00:09:59.410 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[12518], 00:09:59.410 | 99.00th=[64226], 99.50th=[70779], 99.90th=[80217], 99.95th=[80217], 00:09:59.410 | 99.99th=[81265] 00:09:59.410 bw ( KiB/s): min=24576, max=32768, per=29.25%, avg=28672.00, stdev=5792.62, samples=2 00:09:59.410 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:09:59.410 lat (msec) : 2=0.01%, 4=0.01%, 10=86.55%, 20=10.72%, 50=1.88% 00:09:59.410 lat (msec) : 100=0.84% 00:09:59.410 cpu : usr=3.88%, sys=4.28%, ctx=896, majf=0, minf=1 00:09:59.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:59.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.410 issued rwts: total=7038,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.410 job1: (groupid=0, jobs=1): err= 0: pid=2270635: Wed Nov 6 13:51:45 2024 00:09:59.410 read: IOPS=6425, BW=25.1MiB/s (26.3MB/s)(25.1MiB/1002msec) 00:09:59.410 slat (nsec): min=923, max=16041k, avg=75442.45, stdev=464611.42 00:09:59.410 clat (usec): min=1095, max=42345, avg=9494.16, stdev=4098.85 00:09:59.410 lat (usec): min=3349, max=42390, avg=9569.60, stdev=4137.24 00:09:59.410 clat percentiles (usec): 00:09:59.410 | 1.00th=[ 5800], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 7832], 00:09:59.410 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:09:59.410 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[12780], 95.00th=[14091], 00:09:59.410 | 99.00th=[30802], 99.50th=[38536], 99.90th=[38536], 
99.95th=[38536], 00:09:59.410 | 99.99th=[42206] 00:09:59.410 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:59.410 slat (nsec): min=1574, max=9278.0k, avg=73473.40, stdev=401160.43 00:09:59.410 clat (usec): min=4611, max=40007, avg=9761.14, stdev=5157.24 00:09:59.410 lat (usec): min=4616, max=40009, avg=9834.62, stdev=5195.89 00:09:59.410 clat percentiles (usec): 00:09:59.410 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6980], 00:09:59.410 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:09:59.410 | 70.00th=[ 8455], 80.00th=[ 9896], 90.00th=[14615], 95.00th=[23462], 00:09:59.410 | 99.00th=[28443], 99.50th=[31589], 99.90th=[40109], 99.95th=[40109], 00:09:59.410 | 99.99th=[40109] 00:09:59.410 bw ( KiB/s): min=20480, max=32768, per=27.16%, avg=26624.00, stdev=8688.93, samples=2 00:09:59.410 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:09:59.410 lat (msec) : 2=0.01%, 4=0.32%, 10=80.39%, 20=14.08%, 50=5.20% 00:09:59.410 cpu : usr=3.00%, sys=4.50%, ctx=806, majf=0, minf=1 00:09:59.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:59.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.410 issued rwts: total=6438,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.410 job2: (groupid=0, jobs=1): err= 0: pid=2270636: Wed Nov 6 13:51:45 2024 00:09:59.410 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:59.410 slat (nsec): min=984, max=54070k, avg=125686.43, stdev=1337597.15 00:09:59.410 clat (msec): min=5, max=145, avg=15.84, stdev=21.43 00:09:59.410 lat (msec): min=5, max=145, avg=15.96, stdev=21.55 00:09:59.410 clat percentiles (msec): 00:09:59.410 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:09:59.410 | 30.00th=[ 9], 40.00th=[ 
10], 50.00th=[ 10], 60.00th=[ 10], 00:09:59.410 | 70.00th=[ 11], 80.00th=[ 15], 90.00th=[ 20], 95.00th=[ 52], 00:09:59.410 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:09:59.410 | 99.99th=[ 146] 00:09:59.410 write: IOPS=4644, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1003msec); 0 zone resets 00:09:59.410 slat (nsec): min=1607, max=17072k, avg=85555.89, stdev=585429.51 00:09:59.410 clat (usec): min=525, max=43583, avg=10947.16, stdev=5687.93 00:09:59.410 lat (usec): min=2070, max=43591, avg=11032.71, stdev=5717.74 00:09:59.410 clat percentiles (usec): 00:09:59.410 | 1.00th=[ 3458], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8455], 00:09:59.410 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:09:59.410 | 70.00th=[10421], 80.00th=[11469], 90.00th=[13566], 95.00th=[25297], 00:09:59.410 | 99.00th=[38536], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:09:59.410 | 99.99th=[43779] 00:09:59.410 bw ( KiB/s): min=10344, max=26520, per=18.81%, avg=18432.00, stdev=11438.16, samples=2 00:09:59.410 iops : min= 2586, max= 6630, avg=4608.00, stdev=2859.54, samples=2 00:09:59.410 lat (usec) : 750=0.01% 00:09:59.411 lat (msec) : 4=0.50%, 10=64.40%, 20=27.54%, 50=4.81%, 100=1.38% 00:09:59.411 lat (msec) : 250=1.36% 00:09:59.411 cpu : usr=2.69%, sys=3.59%, ctx=407, majf=0, minf=1 00:09:59.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:59.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.411 issued rwts: total=4608,4658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.411 job3: (groupid=0, jobs=1): err= 0: pid=2270637: Wed Nov 6 13:51:45 2024 00:09:59.411 read: IOPS=5729, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1005msec) 00:09:59.411 slat (nsec): min=989, max=8977.1k, avg=84266.22, stdev=576145.46 00:09:59.411 clat (usec): 
min=1036, max=26407, avg=10146.37, stdev=3486.82 00:09:59.411 lat (usec): min=2778, max=26409, avg=10230.63, stdev=3523.30 00:09:59.411 clat percentiles (usec): 00:09:59.411 | 1.00th=[ 4817], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8160], 00:09:59.411 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:59.411 | 70.00th=[10159], 80.00th=[11338], 90.00th=[14091], 95.00th=[17957], 00:09:59.411 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26084], 99.95th=[26346], 00:09:59.411 | 99.99th=[26346] 00:09:59.411 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:09:59.411 slat (nsec): min=1624, max=7014.1k, avg=77325.95, stdev=384763.36 00:09:59.411 clat (usec): min=607, max=26403, avg=11229.80, stdev=4402.88 00:09:59.411 lat (usec): min=615, max=26406, avg=11307.13, stdev=4437.35 00:09:59.411 clat percentiles (usec): 00:09:59.411 | 1.00th=[ 2573], 5.00th=[ 4621], 10.00th=[ 6259], 20.00th=[ 7570], 00:09:59.411 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[10290], 60.00th=[13435], 00:09:59.411 | 70.00th=[13960], 80.00th=[15139], 90.00th=[16188], 95.00th=[19006], 00:09:59.411 | 99.00th=[21365], 99.50th=[22414], 99.90th=[22676], 99.95th=[25560], 00:09:59.411 | 99.99th=[26346] 00:09:59.411 bw ( KiB/s): min=24256, max=24880, per=25.07%, avg=24568.00, stdev=441.23, samples=2 00:09:59.411 iops : min= 6064, max= 6220, avg=6142.00, stdev=110.31, samples=2 00:09:59.411 lat (usec) : 750=0.03% 00:09:59.411 lat (msec) : 2=0.35%, 4=1.52%, 10=55.61%, 20=38.87%, 50=3.61% 00:09:59.411 cpu : usr=4.98%, sys=5.18%, ctx=632, majf=0, minf=2 00:09:59.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:59.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.411 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.411 latency : target=0, window=0, percentile=100.00%, depth=128 
00:09:59.411 00:09:59.411 Run status group 0 (all jobs): 00:09:59.411 READ: bw=92.7MiB/s (97.2MB/s), 17.9MiB/s-27.4MiB/s (18.8MB/s-28.7MB/s), io=93.1MiB (97.7MB), run=1002-1005msec 00:09:59.411 WRITE: bw=95.7MiB/s (100MB/s), 18.1MiB/s-27.9MiB/s (19.0MB/s-29.2MB/s), io=96.2MiB (101MB), run=1002-1005msec 00:09:59.411 00:09:59.411 Disk stats (read/write): 00:09:59.411 nvme0n1: ios=6699/6703, merge=0/0, ticks=22459/19485, in_queue=41944, util=91.18% 00:09:59.411 nvme0n2: ios=5163/5471, merge=0/0, ticks=16853/17223, in_queue=34076, util=94.40% 00:09:59.411 nvme0n3: ios=3574/3584, merge=0/0, ticks=18414/13158, in_queue=31572, util=96.33% 00:09:59.411 nvme0n4: ios=4665/5120, merge=0/0, ticks=44632/57247, in_queue=101879, util=93.21% 00:09:59.671 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:59.671 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2270969 00:09:59.671 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:59.671 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:59.671 [global] 00:09:59.671 thread=1 00:09:59.671 invalidate=1 00:09:59.671 rw=read 00:09:59.671 time_based=1 00:09:59.671 runtime=10 00:09:59.671 ioengine=libaio 00:09:59.671 direct=1 00:09:59.671 bs=4096 00:09:59.671 iodepth=1 00:09:59.671 norandommap=1 00:09:59.671 numjobs=1 00:09:59.671 00:09:59.671 [job0] 00:09:59.671 filename=/dev/nvme0n1 00:09:59.671 [job1] 00:09:59.671 filename=/dev/nvme0n2 00:09:59.671 [job2] 00:09:59.671 filename=/dev/nvme0n3 00:09:59.671 [job3] 00:09:59.671 filename=/dev/nvme0n4 00:09:59.671 Could not set queue depth (nvme0n1) 00:09:59.671 Could not set queue depth (nvme0n2) 00:09:59.671 Could not set queue depth (nvme0n3) 00:09:59.671 Could not set queue depth (nvme0n4) 00:09:59.931 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.931 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.931 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.931 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.931 fio-3.35 00:09:59.931 Starting 4 threads 00:10:02.475 13:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:02.735 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9162752, buflen=4096 00:10:02.735 fio: pid=2271167, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.735 13:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:02.995 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7819264, buflen=4096 00:10:02.995 fio: pid=2271166, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.995 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.995 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:02.995 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11800576, buflen=4096 00:10:02.995 fio: pid=2271164, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.995 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.995 13:51:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:03.256 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5791744, buflen=4096 00:10:03.256 fio: pid=2271165, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:03.256 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.256 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:03.256 00:10:03.256 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2271164: Wed Nov 6 13:51:49 2024 00:10:03.256 read: IOPS=971, BW=3885KiB/s (3979kB/s)(11.3MiB/2966msec) 00:10:03.256 slat (usec): min=6, max=25389, avg=45.04, stdev=598.88 00:10:03.256 clat (usec): min=146, max=43953, avg=971.06, stdev=3374.01 00:10:03.256 lat (usec): min=152, max=43985, avg=1016.11, stdev=3425.49 00:10:03.256 clat percentiles (usec): 00:10:03.256 | 1.00th=[ 239], 5.00th=[ 433], 10.00th=[ 510], 20.00th=[ 578], 00:10:03.256 | 30.00th=[ 635], 40.00th=[ 676], 50.00th=[ 717], 60.00th=[ 750], 00:10:03.256 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 906], 00:10:03.256 | 99.00th=[ 1004], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:10:03.256 | 99.99th=[43779] 00:10:03.256 bw ( KiB/s): min= 95, max= 5456, per=34.41%, avg=3694.20, stdev=2467.90, samples=5 00:10:03.256 iops : min= 23, max= 1364, avg=923.40, stdev=617.25, samples=5 00:10:03.256 lat (usec) : 250=1.32%, 500=7.70%, 750=50.73%, 1000=39.10% 00:10:03.256 lat (msec) : 2=0.42%, 4=0.03%, 50=0.66% 00:10:03.256 cpu : usr=1.25%, sys=3.91%, ctx=2888, majf=0, minf=1 00:10:03.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:03.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.256 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.256 issued rwts: total=2882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.256 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2271165: Wed Nov 6 13:51:49 2024 00:10:03.256 read: IOPS=449, BW=1798KiB/s (1842kB/s)(5656KiB/3145msec) 00:10:03.256 slat (usec): min=6, max=33806, avg=93.08, stdev=1202.59 00:10:03.256 clat (usec): min=357, max=42765, avg=2108.89, stdev=7053.09 00:10:03.256 lat (usec): min=364, max=42790, avg=2202.02, stdev=7145.98 00:10:03.256 clat percentiles (usec): 00:10:03.256 | 1.00th=[ 429], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 652], 00:10:03.256 | 30.00th=[ 734], 40.00th=[ 783], 50.00th=[ 840], 60.00th=[ 889], 00:10:03.256 | 70.00th=[ 971], 80.00th=[ 1106], 90.00th=[ 1172], 95.00th=[ 1221], 00:10:03.256 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:10:03.256 | 99.99th=[42730] 00:10:03.256 bw ( KiB/s): min= 584, max= 2592, per=16.56%, avg=1778.33, stdev=723.25, samples=6 00:10:03.256 iops : min= 146, max= 648, avg=444.50, stdev=180.72, samples=6 00:10:03.256 lat (usec) : 500=3.75%, 750=30.25%, 1000=37.17% 00:10:03.256 lat (msec) : 2=25.65%, 50=3.11% 00:10:03.256 cpu : usr=0.38%, sys=1.40%, ctx=1423, majf=0, minf=2 00:10:03.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.257 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=2271166: Wed Nov 6 13:51:49 2024 00:10:03.257 read: IOPS=684, BW=2738KiB/s (2804kB/s)(7636KiB/2789msec) 00:10:03.257 slat (nsec): min=6416, max=59132, avg=25932.46, stdev=4024.70 00:10:03.257 clat (usec): min=292, max=42782, avg=1417.27, stdev=4272.66 00:10:03.257 lat (usec): min=319, max=42808, avg=1443.21, stdev=4272.73 00:10:03.257 clat percentiles (usec): 00:10:03.257 | 1.00th=[ 529], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 914], 00:10:03.257 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:10:03.257 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:03.257 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:10:03.257 | 99.99th=[42730] 00:10:03.257 bw ( KiB/s): min= 1976, max= 3616, per=25.49%, avg=2737.60, stdev=586.94, samples=5 00:10:03.257 iops : min= 494, max= 904, avg=684.40, stdev=146.73, samples=5 00:10:03.257 lat (usec) : 500=0.58%, 750=3.87%, 1000=49.37% 00:10:03.257 lat (msec) : 2=45.03%, 50=1.10% 00:10:03.257 cpu : usr=1.43%, sys=2.44%, ctx=1910, majf=0, minf=2 00:10:03.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.257 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2271167: Wed Nov 6 13:51:49 2024 00:10:03.257 read: IOPS=858, BW=3432KiB/s (3515kB/s)(8948KiB/2607msec) 00:10:03.257 slat (nsec): min=26719, max=63532, avg=27756.95, stdev=3037.84 00:10:03.257 clat (usec): min=653, max=42296, avg=1121.42, stdev=1220.39 00:10:03.257 lat (usec): min=681, max=42323, avg=1149.18, stdev=1220.38 00:10:03.257 clat percentiles (usec): 00:10:03.257 | 1.00th=[ 816], 5.00th=[ 
914], 10.00th=[ 963], 20.00th=[ 1012], 00:10:03.257 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:03.257 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1237], 00:10:03.257 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1401], 99.95th=[41157], 00:10:03.257 | 99.99th=[42206] 00:10:03.257 bw ( KiB/s): min= 3528, max= 3648, per=33.22%, avg=3566.40, stdev=48.13, samples=5 00:10:03.257 iops : min= 882, max= 912, avg=891.60, stdev=12.03, samples=5 00:10:03.257 lat (usec) : 750=0.18%, 1000=17.20% 00:10:03.257 lat (msec) : 2=82.48%, 50=0.09% 00:10:03.257 cpu : usr=1.73%, sys=3.38%, ctx=2238, majf=0, minf=2 00:10:03.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.257 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.257 00:10:03.257 Run status group 0 (all jobs): 00:10:03.257 READ: bw=10.5MiB/s (11.0MB/s), 1798KiB/s-3885KiB/s (1842kB/s-3979kB/s), io=33.0MiB (34.6MB), run=2607-3145msec 00:10:03.257 00:10:03.257 Disk stats (read/write): 00:10:03.257 nvme0n1: ios=2738/0, merge=0/0, ticks=2449/0, in_queue=2449, util=93.82% 00:10:03.257 nvme0n2: ios=1388/0, merge=0/0, ticks=2916/0, in_queue=2916, util=92.94% 00:10:03.257 nvme0n3: ios=1714/0, merge=0/0, ticks=2454/0, in_queue=2454, util=96.07% 00:10:03.257 nvme0n4: ios=2237/0, merge=0/0, ticks=2283/0, in_queue=2283, util=96.43% 00:10:03.521 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.521 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:03.521 13:51:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.521 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:03.782 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.782 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2270969 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:04.065 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:04.325 nvmf hotplug test: fio failed as expected 00:10:04.325 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:10:04.585 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.585 rmmod nvme_tcp 00:10:04.585 rmmod nvme_fabrics 00:10:04.586 rmmod nvme_keyring 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2267287 ']' 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2267287 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2267287 ']' 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2267287 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2267287 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2267287' 00:10:04.586 killing process with pid 2267287 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2267287 00:10:04.586 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@976 -- # wait 2267287 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.846 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.757 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.757 00:10:06.757 real 0m29.567s 00:10:06.757 user 2m37.752s 00:10:06.757 sys 0m9.797s 00:10:06.757 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.757 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.757 ************************************ 00:10:06.757 END TEST nvmf_fio_target 00:10:06.757 
************************************ 00:10:06.757 13:51:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:06.757 13:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.757 13:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.757 13:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.017 ************************************ 00:10:07.017 START TEST nvmf_bdevio 00:10:07.017 ************************************ 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:07.017 * Looking for test storage... 00:10:07.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 
00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.017 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.018 --rc genhtml_branch_coverage=1 00:10:07.018 --rc genhtml_function_coverage=1 00:10:07.018 --rc genhtml_legend=1 00:10:07.018 --rc geninfo_all_blocks=1 00:10:07.018 --rc geninfo_unexecuted_blocks=1 00:10:07.018 00:10:07.018 ' 00:10:07.018 13:51:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.018 --rc genhtml_branch_coverage=1 00:10:07.018 --rc genhtml_function_coverage=1 00:10:07.018 --rc genhtml_legend=1 00:10:07.018 --rc geninfo_all_blocks=1 00:10:07.018 --rc geninfo_unexecuted_blocks=1 00:10:07.018 00:10:07.018 ' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.018 --rc genhtml_branch_coverage=1 00:10:07.018 --rc genhtml_function_coverage=1 00:10:07.018 --rc genhtml_legend=1 00:10:07.018 --rc geninfo_all_blocks=1 00:10:07.018 --rc geninfo_unexecuted_blocks=1 00:10:07.018 00:10:07.018 ' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.018 --rc genhtml_branch_coverage=1 00:10:07.018 --rc genhtml_function_coverage=1 00:10:07.018 --rc genhtml_legend=1 00:10:07.018 --rc geninfo_all_blocks=1 00:10:07.018 --rc geninfo_unexecuted_blocks=1 00:10:07.018 00:10:07.018 ' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.018 13:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.155 13:52:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.155 13:52:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:15.155 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:15.155 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.155 
13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:15.155 Found net devices under 0000:31:00.0: cvl_0_0 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:15.155 Found net devices under 0000:31:00.1: cvl_0_1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.155 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:10:15.156 00:10:15.156 --- 10.0.0.2 ping statistics --- 00:10:15.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.156 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:10:15.156 00:10:15.156 --- 10.0.0.1 ping statistics --- 00:10:15.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.156 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.156 13:52:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2276401 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2276401 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2276401 ']' 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.156 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.156 [2024-11-06 13:52:00.995982] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:10:15.156 [2024-11-06 13:52:00.996052] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.156 [2024-11-06 13:52:01.100007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.156 [2024-11-06 13:52:01.150784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.156 [2024-11-06 13:52:01.150835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.156 [2024-11-06 13:52:01.150843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.156 [2024-11-06 13:52:01.150850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.156 [2024-11-06 13:52:01.150857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.156 [2024-11-06 13:52:01.153172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.156 [2024-11-06 13:52:01.153317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.156 [2024-11-06 13:52:01.153475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.156 [2024-11-06 13:52:01.153476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 [2024-11-06 13:52:01.874605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.726 13:52:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 Malloc0 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.726 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 [2024-11-06 13:52:01.949885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.727 { 00:10:15.727 "params": { 00:10:15.727 "name": "Nvme$subsystem", 00:10:15.727 "trtype": "$TEST_TRANSPORT", 00:10:15.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.727 "adrfam": "ipv4", 00:10:15.727 "trsvcid": "$NVMF_PORT", 00:10:15.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.727 "hdgst": ${hdgst:-false}, 00:10:15.727 "ddgst": ${ddgst:-false} 00:10:15.727 }, 00:10:15.727 "method": "bdev_nvme_attach_controller" 00:10:15.727 } 00:10:15.727 EOF 00:10:15.727 )") 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:15.727 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.727 "params": { 00:10:15.727 "name": "Nvme1", 00:10:15.727 "trtype": "tcp", 00:10:15.727 "traddr": "10.0.0.2", 00:10:15.727 "adrfam": "ipv4", 00:10:15.727 "trsvcid": "4420", 00:10:15.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.727 "hdgst": false, 00:10:15.727 "ddgst": false 00:10:15.727 }, 00:10:15.727 "method": "bdev_nvme_attach_controller" 00:10:15.727 }' 00:10:15.987 [2024-11-06 13:52:02.014687] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:10:15.987 [2024-11-06 13:52:02.014763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276586 ] 00:10:15.987 [2024-11-06 13:52:02.108857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.987 [2024-11-06 13:52:02.165272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.987 [2024-11-06 13:52:02.165433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.987 [2024-11-06 13:52:02.165434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.247 I/O targets: 00:10:16.247 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.247 00:10:16.247 00:10:16.247 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.247 http://cunit.sourceforge.net/ 00:10:16.247 00:10:16.247 00:10:16.247 Suite: bdevio tests on: Nvme1n1 00:10:16.508 Test: blockdev write read block ...passed 00:10:16.508 Test: blockdev write zeroes read block ...passed 00:10:16.508 Test: blockdev write zeroes read no split ...passed 00:10:16.508 Test: blockdev write zeroes read split 
...passed 00:10:16.508 Test: blockdev write zeroes read split partial ...passed 00:10:16.508 Test: blockdev reset ...[2024-11-06 13:52:02.667509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:16.508 [2024-11-06 13:52:02.667611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d401c0 (9): Bad file descriptor 00:10:16.769 [2024-11-06 13:52:02.810897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:16.769 passed 00:10:16.769 Test: blockdev write read 8 blocks ...passed 00:10:16.769 Test: blockdev write read size > 128k ...passed 00:10:16.769 Test: blockdev write read invalid size ...passed 00:10:16.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.769 Test: blockdev write read max offset ...passed 00:10:16.769 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.769 Test: blockdev writev readv 8 blocks ...passed 00:10:16.769 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.030 Test: blockdev writev readv block ...passed 00:10:17.030 Test: blockdev writev readv size > 128k ...passed 00:10:17.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.030 Test: blockdev comparev and writev ...[2024-11-06 13:52:03.160977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.161027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.161044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 
13:52:03.161053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.161541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.161554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.161569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.161579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.161917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.161930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.161944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.161953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.162366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.162377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.162391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.030 [2024-11-06 13:52:03.162399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.030 passed 00:10:17.030 Test: blockdev nvme passthru rw ...passed 00:10:17.030 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:52:03.248727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.030 [2024-11-06 13:52:03.248801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.249131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.030 [2024-11-06 13:52:03.249143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.249381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.030 [2024-11-06 13:52:03.249392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.030 [2024-11-06 13:52:03.249812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.030 [2024-11-06 13:52:03.249823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.030 passed 00:10:17.030 Test: blockdev nvme admin passthru ...passed 00:10:17.291 Test: blockdev copy ...passed 00:10:17.291 00:10:17.291 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.291 suites 1 1 n/a 0 0 00:10:17.291 tests 23 23 23 0 0 00:10:17.291 asserts 152 152 152 0 n/a 00:10:17.291 00:10:17.291 Elapsed time = 1.641 seconds 
00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.291 rmmod nvme_tcp 00:10:17.291 rmmod nvme_fabrics 00:10:17.291 rmmod nvme_keyring 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2276401 ']' 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2276401 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 2276401 ']' 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2276401 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:17.291 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2276401 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2276401' 00:10:17.552 killing process with pid 2276401 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2276401 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2276401 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.552 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:17.553 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.553 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.553 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.553 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.099 00:10:20.099 real 0m12.719s 00:10:20.099 user 0m15.340s 00:10:20.099 sys 0m6.357s 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.099 ************************************ 00:10:20.099 END TEST nvmf_bdevio 00:10:20.099 ************************************ 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:20.099 00:10:20.099 real 5m7.720s 00:10:20.099 user 11m55.276s 00:10:20.099 sys 1m53.423s 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.099 13:52:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.099 ************************************ 00:10:20.099 END TEST nvmf_target_core 00:10:20.099 ************************************ 00:10:20.099 13:52:05 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.099 13:52:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:20.099 13:52:05 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.099 13:52:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:20.100 ************************************ 00:10:20.100 START TEST nvmf_target_extra 00:10:20.100 ************************************ 00:10:20.100 13:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:20.100 * Looking for test storage... 00:10:20.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:20.100 13:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.100 13:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.100 13:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.100 --rc genhtml_branch_coverage=1 00:10:20.100 --rc genhtml_function_coverage=1 00:10:20.100 --rc genhtml_legend=1 00:10:20.100 --rc geninfo_all_blocks=1 
00:10:20.100 --rc geninfo_unexecuted_blocks=1 00:10:20.100 00:10:20.100 ' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.100 --rc genhtml_branch_coverage=1 00:10:20.100 --rc genhtml_function_coverage=1 00:10:20.100 --rc genhtml_legend=1 00:10:20.100 --rc geninfo_all_blocks=1 00:10:20.100 --rc geninfo_unexecuted_blocks=1 00:10:20.100 00:10:20.100 ' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.100 --rc genhtml_branch_coverage=1 00:10:20.100 --rc genhtml_function_coverage=1 00:10:20.100 --rc genhtml_legend=1 00:10:20.100 --rc geninfo_all_blocks=1 00:10:20.100 --rc geninfo_unexecuted_blocks=1 00:10:20.100 00:10:20.100 ' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.100 --rc genhtml_branch_coverage=1 00:10:20.100 --rc genhtml_function_coverage=1 00:10:20.100 --rc genhtml_legend=1 00:10:20.100 --rc geninfo_all_blocks=1 00:10:20.100 --rc geninfo_unexecuted_blocks=1 00:10:20.100 00:10:20.100 ' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.100 13:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.101 ************************************ 00:10:20.101 START TEST nvmf_example 00:10:20.101 ************************************ 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.101 * Looking for test storage... 00:10:20.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.101 
13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.101 --rc genhtml_branch_coverage=1 00:10:20.101 --rc genhtml_function_coverage=1 00:10:20.101 --rc genhtml_legend=1 00:10:20.101 --rc geninfo_all_blocks=1 00:10:20.101 --rc geninfo_unexecuted_blocks=1 00:10:20.101 00:10:20.101 ' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.101 --rc genhtml_branch_coverage=1 00:10:20.101 --rc genhtml_function_coverage=1 00:10:20.101 --rc genhtml_legend=1 00:10:20.101 --rc geninfo_all_blocks=1 00:10:20.101 --rc geninfo_unexecuted_blocks=1 00:10:20.101 00:10:20.101 ' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.101 --rc genhtml_branch_coverage=1 00:10:20.101 --rc genhtml_function_coverage=1 00:10:20.101 --rc genhtml_legend=1 00:10:20.101 --rc geninfo_all_blocks=1 00:10:20.101 --rc geninfo_unexecuted_blocks=1 00:10:20.101 00:10:20.101 ' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.101 --rc 
genhtml_branch_coverage=1 00:10:20.101 --rc genhtml_function_coverage=1 00:10:20.101 --rc genhtml_legend=1 00:10:20.101 --rc geninfo_all_blocks=1 00:10:20.101 --rc geninfo_unexecuted_blocks=1 00:10:20.101 00:10:20.101 ' 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.101 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:20.363 13:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.363 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.364 
13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.364 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.509 13:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:28.509 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:28.509 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:28.509 Found net devices under 0000:31:00.0: cvl_0_0 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.509 13:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:28.509 Found net devices under 0000:31:00.1: cvl_0_1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.509 
13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.509 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:10:28.510 00:10:28.510 --- 10.0.0.2 ping statistics --- 00:10:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.510 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:28.510 00:10:28.510 --- 10.0.0.1 ping statistics --- 00:10:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.510 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.510 13:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.510 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2281346 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2281346 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2281346 ']' 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:28.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.510 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.771 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.771 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:28.771 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:28.771 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.771 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:28.772 
13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.772 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:28.772 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:41.096 Initializing NVMe Controllers 00:10:41.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.096 Initialization complete. Launching workers. 00:10:41.096 ======================================================== 00:10:41.096 Latency(us) 00:10:41.096 Device Information : IOPS MiB/s Average min max 00:10:41.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18449.10 72.07 3468.74 625.21 15549.67 00:10:41.096 ======================================================== 00:10:41.096 Total : 18449.10 72.07 3468.74 625.21 15549.67 00:10:41.096 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.096 rmmod nvme_tcp 00:10:41.096 rmmod nvme_fabrics 00:10:41.096 rmmod nvme_keyring 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2281346 ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2281346 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2281346 ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2281346 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2281346 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2281346' 00:10:41.096 killing process with pid 2281346 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2281346 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2281346 00:10:41.096 nvmf threads initialize successfully 00:10:41.096 bdev subsystem init successfully 00:10:41.096 created a nvmf target service 00:10:41.096 create targets's poll groups done 00:10:41.096 all subsystems of target started 00:10:41.096 nvmf target is running 00:10:41.096 all subsystems of target stopped 00:10:41.096 destroy targets's poll groups done 00:10:41.096 destroyed the nvmf target service 00:10:41.096 bdev subsystem 
finish successfully 00:10:41.096 nvmf threads destroy successfully 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.096 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.358 00:10:41.358 real 0m21.401s 00:10:41.358 user 0m46.265s 00:10:41.358 sys 0m7.033s 00:10:41.358 
13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.358 ************************************ 00:10:41.358 END TEST nvmf_example 00:10:41.358 ************************************ 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.358 13:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.621 ************************************ 00:10:41.621 START TEST nvmf_filesystem 00:10:41.621 ************************************ 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.621 * Looking for test storage... 
00:10:41.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.621 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.622 
13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.622 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:41.622 --rc genhtml_branch_coverage=1 00:10:41.622 --rc genhtml_function_coverage=1 00:10:41.622 --rc genhtml_legend=1 00:10:41.622 --rc geninfo_all_blocks=1 00:10:41.622 --rc geninfo_unexecuted_blocks=1 00:10:41.622 00:10:41.622 ' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.622 --rc genhtml_branch_coverage=1 00:10:41.622 --rc genhtml_function_coverage=1 00:10:41.622 --rc genhtml_legend=1 00:10:41.622 --rc geninfo_all_blocks=1 00:10:41.622 --rc geninfo_unexecuted_blocks=1 00:10:41.622 00:10:41.622 ' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.622 --rc genhtml_branch_coverage=1 00:10:41.622 --rc genhtml_function_coverage=1 00:10:41.622 --rc genhtml_legend=1 00:10:41.622 --rc geninfo_all_blocks=1 00:10:41.622 --rc geninfo_unexecuted_blocks=1 00:10:41.622 00:10:41.622 ' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.622 --rc genhtml_branch_coverage=1 00:10:41.622 --rc genhtml_function_coverage=1 00:10:41.622 --rc genhtml_legend=1 00:10:41.622 --rc geninfo_all_blocks=1 00:10:41.622 --rc geninfo_unexecuted_blocks=1 00:10:41.622 00:10:41.622 ' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:41.622 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:41.622 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:41.622 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:41.622 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:41.623 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.623 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.623 
13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.623 #define SPDK_CONFIG_H 00:10:41.623 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:41.623 #define SPDK_CONFIG_APPS 1 00:10:41.623 #define SPDK_CONFIG_ARCH native 00:10:41.623 #undef SPDK_CONFIG_ASAN 00:10:41.623 #undef SPDK_CONFIG_AVAHI 00:10:41.623 #undef SPDK_CONFIG_CET 00:10:41.623 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:41.623 #define SPDK_CONFIG_COVERAGE 1 00:10:41.623 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.623 #undef SPDK_CONFIG_CRYPTO 00:10:41.623 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.623 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.623 #undef SPDK_CONFIG_DAOS 00:10:41.623 #define SPDK_CONFIG_DAOS_DIR 00:10:41.623 #define SPDK_CONFIG_DEBUG 1 00:10:41.623 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.623 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.623 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.623 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.623 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.623 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.623 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.623 #define SPDK_CONFIG_EXAMPLES 1 00:10:41.623 #undef SPDK_CONFIG_FC 00:10:41.623 #define SPDK_CONFIG_FC_PATH 00:10:41.623 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.623 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.623 #define SPDK_CONFIG_FSDEV 1 00:10:41.623 #undef SPDK_CONFIG_FUSE 00:10:41.623 #undef SPDK_CONFIG_FUZZER 00:10:41.623 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.623 #undef SPDK_CONFIG_GOLANG 00:10:41.623 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.623 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.623 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.623 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.623 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.623 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.623 #undef SPDK_CONFIG_HAVE_LZ4 00:10:41.623 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:41.623 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:41.623 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.623 #define SPDK_CONFIG_IDXD 1 00:10:41.623 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.623 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.623 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.623 #define SPDK_CONFIG_ISAL 1 00:10:41.623 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.623 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.623 #define SPDK_CONFIG_LIBDIR 00:10:41.623 #undef SPDK_CONFIG_LTO 00:10:41.623 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.623 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:41.623 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.623 #undef SPDK_CONFIG_OCF 00:10:41.623 #define SPDK_CONFIG_OCF_PATH 00:10:41.623 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.623 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.623 #define SPDK_CONFIG_PGO_DIR 00:10:41.623 #undef SPDK_CONFIG_PGO_USE 00:10:41.623 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.623 #undef SPDK_CONFIG_RAID5F 00:10:41.623 #undef SPDK_CONFIG_RBD 00:10:41.623 #define SPDK_CONFIG_RDMA 1 00:10:41.623 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.623 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.623 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.623 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.623 #define SPDK_CONFIG_SHARED 1 00:10:41.623 #undef SPDK_CONFIG_SMA 00:10:41.623 #define SPDK_CONFIG_TESTS 1 00:10:41.623 #undef SPDK_CONFIG_TSAN 00:10:41.623 #define SPDK_CONFIG_UBLK 1 00:10:41.623 #define SPDK_CONFIG_UBSAN 1 00:10:41.623 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.623 #undef SPDK_CONFIG_URING 00:10:41.623 #define SPDK_CONFIG_URING_PATH 00:10:41.623 #undef SPDK_CONFIG_URING_ZNS 00:10:41.623 #undef SPDK_CONFIG_USDT 00:10:41.623 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.623 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:41.623 #define SPDK_CONFIG_VFIO_USER 1 00:10:41.623 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.623 #define SPDK_CONFIG_VHOST 1 00:10:41.623 #define SPDK_CONFIG_VIRTIO 1 00:10:41.623 #undef SPDK_CONFIG_VTUNE 00:10:41.623 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.623 #define SPDK_CONFIG_WERROR 1 00:10:41.623 #define SPDK_CONFIG_WPDK_DIR 00:10:41.623 #undef SPDK_CONFIG_XNVME 00:10:41.623 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.623 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.624 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:41.888 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:41.888 
13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:41.888 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:41.888 
13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:41.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:41.889 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2284140 ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2284140 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.VTvTH0 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VTvTH0/tests/target /tmp/spdk.VTvTH0 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=434749440 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4849680384 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123287891968 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6068625408 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23371776 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=387072 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:41.890 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=116736 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677761024 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=499712 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:41.890 * Looking for test storage... 
00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.890 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:41.890 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123287891968 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8283217920 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.891 13:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:41.891 13:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.891 --rc genhtml_branch_coverage=1 00:10:41.891 --rc genhtml_function_coverage=1 00:10:41.891 --rc genhtml_legend=1 00:10:41.891 --rc geninfo_all_blocks=1 00:10:41.891 --rc geninfo_unexecuted_blocks=1 00:10:41.891 00:10:41.891 ' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.891 --rc genhtml_branch_coverage=1 00:10:41.891 --rc genhtml_function_coverage=1 00:10:41.891 --rc genhtml_legend=1 00:10:41.891 --rc geninfo_all_blocks=1 00:10:41.891 --rc geninfo_unexecuted_blocks=1 00:10:41.891 00:10:41.891 ' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.891 --rc genhtml_branch_coverage=1 00:10:41.891 --rc genhtml_function_coverage=1 00:10:41.891 --rc genhtml_legend=1 00:10:41.891 --rc geninfo_all_blocks=1 00:10:41.891 --rc geninfo_unexecuted_blocks=1 00:10:41.891 00:10:41.891 ' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.891 --rc genhtml_branch_coverage=1 00:10:41.891 --rc genhtml_function_coverage=1 00:10:41.891 --rc genhtml_legend=1 00:10:41.891 --rc geninfo_all_blocks=1 00:10:41.891 --rc geninfo_unexecuted_blocks=1 00:10:41.891 00:10:41.891 ' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.891 13:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.891 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.892 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.035 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:50.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:50.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.035 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:50.035 Found net devices under 0000:31:00.0: cvl_0_0 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.035 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:50.035 Found net devices under 0000:31:00.1: cvl_0_1 00:10:50.036 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:10:50.036 00:10:50.036 --- 10.0.0.2 ping statistics --- 00:10:50.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.036 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:50.036 00:10:50.036 --- 10.0.0.1 ping statistics --- 00:10:50.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.036 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:50.036 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.036 ************************************ 00:10:50.036 START TEST nvmf_filesystem_no_in_capsule 00:10:50.036 ************************************ 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2287809 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2287809 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 2287809 ']' 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.036 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.036 [2024-11-06 13:52:35.887720] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:10:50.036 [2024-11-06 13:52:35.887792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.036 [2024-11-06 13:52:35.990666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.036 [2024-11-06 13:52:36.044439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.036 [2024-11-06 13:52:36.044497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.036 [2024-11-06 13:52:36.044506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.036 [2024-11-06 13:52:36.044513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.036 [2024-11-06 13:52:36.044519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.036 [2024-11-06 13:52:36.046631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.036 [2024-11-06 13:52:36.046892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.036 [2024-11-06 13:52:36.047172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.036 [2024-11-06 13:52:36.047178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.607 [2024-11-06 13:52:36.773694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.607 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 Malloc1 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 [2024-11-06 13:52:36.935953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:50.869 13:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:50.869 { 00:10:50.869 "name": "Malloc1", 00:10:50.869 "aliases": [ 00:10:50.869 "88b12246-a251-40de-9cdc-91173c3b429f" 00:10:50.869 ], 00:10:50.869 "product_name": "Malloc disk", 00:10:50.869 "block_size": 512, 00:10:50.869 "num_blocks": 1048576, 00:10:50.869 "uuid": "88b12246-a251-40de-9cdc-91173c3b429f", 00:10:50.869 "assigned_rate_limits": { 00:10:50.869 "rw_ios_per_sec": 0, 00:10:50.869 "rw_mbytes_per_sec": 0, 00:10:50.869 "r_mbytes_per_sec": 0, 00:10:50.869 "w_mbytes_per_sec": 0 00:10:50.869 }, 00:10:50.869 "claimed": true, 00:10:50.869 "claim_type": "exclusive_write", 00:10:50.869 "zoned": false, 00:10:50.869 "supported_io_types": { 00:10:50.869 "read": true, 00:10:50.869 "write": true, 00:10:50.869 "unmap": true, 00:10:50.869 "flush": true, 00:10:50.869 "reset": true, 00:10:50.869 "nvme_admin": false, 00:10:50.869 "nvme_io": false, 00:10:50.869 "nvme_io_md": false, 00:10:50.869 "write_zeroes": true, 00:10:50.869 "zcopy": true, 00:10:50.869 "get_zone_info": false, 00:10:50.869 "zone_management": false, 00:10:50.869 "zone_append": false, 00:10:50.869 "compare": false, 00:10:50.869 "compare_and_write": 
false, 00:10:50.869 "abort": true, 00:10:50.869 "seek_hole": false, 00:10:50.869 "seek_data": false, 00:10:50.869 "copy": true, 00:10:50.869 "nvme_iov_md": false 00:10:50.869 }, 00:10:50.869 "memory_domains": [ 00:10:50.869 { 00:10:50.869 "dma_device_id": "system", 00:10:50.869 "dma_device_type": 1 00:10:50.869 }, 00:10:50.869 { 00:10:50.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.869 "dma_device_type": 2 00:10:50.869 } 00:10:50.869 ], 00:10:50.869 "driver_specific": {} 00:10:50.869 } 00:10:50.869 ]' 00:10:50.869 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:50.869 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:52.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:52.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:52.782 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:54.692 13:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:54.692 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:55.631 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:56.571 13:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.571 ************************************ 00:10:56.571 START TEST filesystem_ext4 00:10:56.571 ************************************ 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:56.571 13:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:56.571 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:56.571 mke2fs 1.47.0 (5-Feb-2023) 00:10:56.571 Discarding device blocks: 0/522240 done 00:10:56.571 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.571 Filesystem UUID: 5202a1f1-db1a-47f1-a2fd-0acaa56da52b 00:10:56.571 Superblock backups stored on blocks: 00:10:56.571 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.571 00:10:56.571 Allocating group tables: 0/64 done 00:10:56.571 Writing inode tables: 0/64 done 00:10:56.571 Creating journal (8192 blocks): done 00:10:56.830 Writing superblocks and filesystem accounting information: 0/64 done 00:10:56.830 00:10:56.831 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:56.831 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.406 13:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2287809 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.406 00:11:03.406 real 0m6.390s 00:11:03.406 user 0m0.035s 00:11:03.406 sys 0m0.073s 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.406 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.406 ************************************ 00:11:03.406 END TEST filesystem_ext4 00:11:03.406 ************************************ 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.406 
13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.406 ************************************ 00:11:03.406 START TEST filesystem_btrfs 00:11:03.406 ************************************ 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:03.406 13:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:03.406 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.406 btrfs-progs v6.8.1 00:11:03.406 See https://btrfs.readthedocs.io for more information. 00:11:03.406 00:11:03.406 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:03.406 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.406 this does not affect your deployments: 00:11:03.406 - DUP for metadata (-m dup) 00:11:03.406 - enabled no-holes (-O no-holes) 00:11:03.406 - enabled free-space-tree (-R free-space-tree) 00:11:03.406 00:11:03.406 Label: (null) 00:11:03.406 UUID: 06868b54-5120-4e23-b49c-1644c51633d2 00:11:03.406 Node size: 16384 00:11:03.406 Sector size: 4096 (CPU page size: 4096) 00:11:03.406 Filesystem size: 510.00MiB 00:11:03.406 Block group profiles: 00:11:03.406 Data: single 8.00MiB 00:11:03.406 Metadata: DUP 32.00MiB 00:11:03.406 System: DUP 8.00MiB 00:11:03.407 SSD detected: yes 00:11:03.407 Zoned device: no 00:11:03.407 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.407 Checksum: crc32c 00:11:03.407 Number of devices: 1 00:11:03.407 Devices: 00:11:03.407 ID SIZE PATH 00:11:03.407 1 510.00MiB /dev/nvme0n1p1 00:11:03.407 00:11:03.407 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:03.407 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.346 13:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2287809 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.346 00:11:04.346 real 0m1.373s 00:11:04.346 user 0m0.028s 00:11:04.346 sys 0m0.119s 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.346 
13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 ************************************ 00:11:04.346 END TEST filesystem_btrfs 00:11:04.346 ************************************ 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 ************************************ 00:11:04.346 START TEST filesystem_xfs 00:11:04.346 ************************************ 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:04.346 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:04.346 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:04.346 = sectsz=512 attr=2, projid32bit=1 00:11:04.346 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:04.346 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:04.346 data = bsize=4096 blocks=130560, imaxpct=25 00:11:04.346 = sunit=0 swidth=0 blks 00:11:04.346 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:04.346 log =internal log bsize=4096 blocks=16384, version=2 00:11:04.346 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:04.346 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:05.285 Discarding blocks...Done. 
00:11:05.285 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:05.285 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2287809 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.826 13:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.826 00:11:07.826 real 0m3.373s 00:11:07.826 user 0m0.029s 00:11:07.826 sys 0m0.074s 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.826 ************************************ 00:11:07.826 END TEST filesystem_xfs 00:11:07.826 ************************************ 00:11:07.826 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:08.085 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2287809 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2287809 ']' 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2287809 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2287809 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2287809' 00:11:08.654 killing process with pid 2287809 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2287809 00:11:08.654 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 2287809 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.916 00:11:08.916 real 0m19.200s 00:11:08.916 user 1m15.830s 00:11:08.916 sys 0m1.451s 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.916 ************************************ 00:11:08.916 END TEST nvmf_filesystem_no_in_capsule 00:11:08.916 ************************************ 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.916 13:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.916 ************************************ 00:11:08.916 START TEST nvmf_filesystem_in_capsule 00:11:08.916 ************************************ 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2291727 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2291727 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2291727 ']' 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.916 13:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.916 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.916 [2024-11-06 13:52:55.165618] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:11:08.916 [2024-11-06 13:52:55.165666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.176 [2024-11-06 13:52:55.257617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.176 [2024-11-06 13:52:55.287622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.177 [2024-11-06 13:52:55.287651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.177 [2024-11-06 13:52:55.287658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.177 [2024-11-06 13:52:55.287662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.177 [2024-11-06 13:52:55.287666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.177 [2024-11-06 13:52:55.289033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.177 [2024-11-06 13:52:55.289181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.177 [2024-11-06 13:52:55.289301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.177 [2024-11-06 13:52:55.289303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.747 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.747 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:09.747 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:09.747 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.747 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 [2024-11-06 13:52:56.015553] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.747 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 Malloc1 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 13:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 [2024-11-06 13:52:56.153558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.007 13:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:10.007 { 00:11:10.007 "name": "Malloc1", 00:11:10.007 "aliases": [ 00:11:10.007 "69ffac0a-64ae-445a-a8a4-f5f5ef8a7690" 00:11:10.007 ], 00:11:10.007 "product_name": "Malloc disk", 00:11:10.007 "block_size": 512, 00:11:10.007 "num_blocks": 1048576, 00:11:10.007 "uuid": "69ffac0a-64ae-445a-a8a4-f5f5ef8a7690", 00:11:10.007 "assigned_rate_limits": { 00:11:10.007 "rw_ios_per_sec": 0, 00:11:10.007 "rw_mbytes_per_sec": 0, 00:11:10.007 "r_mbytes_per_sec": 0, 00:11:10.007 "w_mbytes_per_sec": 0 00:11:10.007 }, 00:11:10.007 "claimed": true, 00:11:10.007 "claim_type": "exclusive_write", 00:11:10.007 "zoned": false, 00:11:10.007 "supported_io_types": { 00:11:10.007 "read": true, 00:11:10.007 "write": true, 00:11:10.007 "unmap": true, 00:11:10.007 "flush": true, 00:11:10.007 "reset": true, 00:11:10.007 "nvme_admin": false, 00:11:10.007 "nvme_io": false, 00:11:10.007 "nvme_io_md": false, 00:11:10.007 "write_zeroes": true, 00:11:10.007 "zcopy": true, 00:11:10.007 "get_zone_info": false, 00:11:10.007 "zone_management": false, 00:11:10.007 "zone_append": false, 00:11:10.007 "compare": false, 00:11:10.007 "compare_and_write": false, 00:11:10.007 "abort": true, 00:11:10.007 "seek_hole": false, 00:11:10.007 "seek_data": false, 00:11:10.007 "copy": true, 00:11:10.007 "nvme_iov_md": false 00:11:10.007 }, 00:11:10.007 "memory_domains": [ 00:11:10.007 { 00:11:10.007 "dma_device_id": "system", 00:11:10.007 "dma_device_type": 1 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.007 "dma_device_type": 2 00:11:10.007 } 00:11:10.007 ], 00:11:10.007 
"driver_specific": {} 00:11:10.007 } 00:11:10.007 ]' 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:10.007 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.920 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.920 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:11.920 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.920 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:11:11.920 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:13.830 13:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:13.830 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:14.089 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:14.089 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.472 ************************************ 00:11:15.472 START TEST filesystem_in_capsule_ext4 00:11:15.472 ************************************ 00:11:15.472 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:15.472 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:15.472 mke2fs 1.47.0 (5-Feb-2023) 00:11:15.472 Discarding device blocks: 
0/522240 done 00:11:15.472 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:15.472 Filesystem UUID: 4a8764e4-cce0-48b2-aaf6-dbafa3be7290 00:11:15.472 Superblock backups stored on blocks: 00:11:15.472 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:15.472 00:11:15.472 Allocating group tables: 0/64 done 00:11:15.472 Writing inode tables: 0/64 done 00:11:18.014 Creating journal (8192 blocks): done 00:11:20.536 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:20.536 00:11:20.536 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:20.536 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.140 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2291727 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.141 00:11:27.141 real 0m11.032s 00:11:27.141 user 0m0.034s 00:11:27.141 sys 0m0.077s 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 ************************************ 00:11:27.141 END TEST filesystem_in_capsule_ext4 00:11:27.141 ************************************ 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 ************************************ 00:11:27.141 START 
TEST filesystem_in_capsule_btrfs 00:11:27.141 ************************************ 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.141 btrfs-progs v6.8.1 00:11:27.141 See https://btrfs.readthedocs.io for more information. 00:11:27.141 00:11:27.141 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:27.141 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.141 this does not affect your deployments: 00:11:27.141 - DUP for metadata (-m dup) 00:11:27.141 - enabled no-holes (-O no-holes) 00:11:27.141 - enabled free-space-tree (-R free-space-tree) 00:11:27.141 00:11:27.141 Label: (null) 00:11:27.141 UUID: c9709e35-82a4-40e3-8561-093edf26e231 00:11:27.141 Node size: 16384 00:11:27.141 Sector size: 4096 (CPU page size: 4096) 00:11:27.141 Filesystem size: 510.00MiB 00:11:27.141 Block group profiles: 00:11:27.141 Data: single 8.00MiB 00:11:27.141 Metadata: DUP 32.00MiB 00:11:27.141 System: DUP 8.00MiB 00:11:27.141 SSD detected: yes 00:11:27.141 Zoned device: no 00:11:27.141 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.141 Checksum: crc32c 00:11:27.141 Number of devices: 1 00:11:27.141 Devices: 00:11:27.141 ID SIZE PATH 00:11:27.141 1 510.00MiB /dev/nvme0n1p1 00:11:27.141 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2291727 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.141 00:11:27.141 real 0m0.379s 00:11:27.141 user 0m0.035s 00:11:27.141 sys 0m0.109s 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 ************************************ 00:11:27.141 END TEST filesystem_in_capsule_btrfs 00:11:27.141 ************************************ 00:11:27.141 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 ************************************ 00:11:27.141 START TEST filesystem_in_capsule_xfs 00:11:27.141 ************************************ 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:27.141 
13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:27.141 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.141 = sectsz=512 attr=2, projid32bit=1 00:11:27.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.142 data = bsize=4096 blocks=130560, imaxpct=25 00:11:27.142 = sunit=0 swidth=0 blks 00:11:27.142 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.142 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.142 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.142 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.081 Discarding blocks...Done. 
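A quick cross-check of the mkfs.xfs geometry just printed (my own arithmetic, not part of the harness): data blocks times block size should land just under the 536870912-byte namespace size computed earlier in the log, with the gap being partition-table overhead. The result, 534773760 bytes, is exactly the 510.00MiB that the mkfs.btrfs run reported for the same partition.

```shell
#!/bin/sh
# Verify the mkfs.xfs data-section geometry against the namespace size.
bsize=4096           # "data ... bsize=4096" from the mkfs.xfs output above
blocks=130560        # "blocks=130560" on the same line
nvme_size=536870912  # nvme_size derived from /sys/block earlier in the log

data_bytes=$((bsize * blocks))
overhead=$((nvme_size - data_bytes))

echo "data bytes: $data_bytes"   # 534773760 (= 510 MiB)
echo "overhead:   $overhead"     # 2097152 (2 MiB: GPT label + alignment)
```

The 2 MiB difference is consistent with parted's default 1 MiB start alignment plus the end-of-disk gap for the backup GPT, though the exact split is not shown in the log.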
00:11:28.081 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:28.081 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2291727 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.622 00:11:30.622 real 0m3.616s 00:11:30.622 user 0m0.030s 00:11:30.622 sys 0m0.073s 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.622 ************************************ 00:11:30.622 END TEST filesystem_in_capsule_xfs 00:11:30.622 ************************************ 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:30.622 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.883 13:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2291727 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2291727 ']' 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2291727 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.883 13:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2291727 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2291727' 00:11:30.883 killing process with pid 2291727 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2291727 00:11:30.883 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2291727 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.143 00:11:31.143 real 0m22.216s 00:11:31.143 user 1m27.906s 00:11:31.143 sys 0m1.497s 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.143 ************************************ 00:11:31.143 END TEST nvmf_filesystem_in_capsule 00:11:31.143 ************************************ 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.143 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.143 rmmod nvme_tcp 00:11:31.143 rmmod nvme_fabrics 00:11:31.143 rmmod nvme_keyring 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.403 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.313 00:11:33.313 real 0m51.867s 00:11:33.313 user 2m46.150s 00:11:33.313 sys 0m8.927s 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.313 ************************************ 00:11:33.313 END TEST nvmf_filesystem 00:11:33.313 ************************************ 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.313 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.574 ************************************ 00:11:33.574 START TEST nvmf_target_discovery 00:11:33.574 ************************************ 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.574 * Looking for test storage... 
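Throughout the log above, the harness locates the attached NVMe-oF namespace by its SPDK serial rather than by a fixed device name: it polls `lsblk -l -o NAME,SERIAL`, counts matches of SPDKISFASTANDAWESOME, then extracts the device name with a lookahead `grep -oP`. A standalone sketch of that pattern follows; the function names are mine, not the harness's, and the polling wrapper only loosely mirrors the sleep-2, 16-attempt loop seen at autotest_common.sh@1207-1210.

```shell
#!/bin/sh
# Pure helper: pull the device NAME that precedes a given SERIAL in
# "lsblk -l -o NAME,SERIAL" output (same grep -oP lookahead as the log).
name_for_serial() {
    printf '%s\n' "$1" | grep -oP "([\w]*)(?=\s+$2)"
}

# Polling wrapper (needs a real lsblk; hypothetical, not the harness code).
wait_for_serial() {
    serial="$1" i=0
    while [ "$i" -le 15 ]; do
        name=$(name_for_serial "$(lsblk -l -o NAME,SERIAL)" "$serial")
        [ -n "$name" ] && { printf '%s\n' "$name"; return 0; }
        i=$((i + 1))
        sleep 2
    done
    return 1
}

name_for_serial "nvme0n1 SPDKISFASTANDAWESOME" SPDKISFASTANDAWESOME
# → nvme0n1 (matching nvme_name=nvme0n1 in the log)
```

Matching on the serial keeps the tests independent of enumeration order, which matters here because the fabric-attached namespace can appear as any nvmeXnY.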
00:11:33.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:33.574 
13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:33.574 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.575 --rc genhtml_branch_coverage=1 00:11:33.575 --rc genhtml_function_coverage=1 00:11:33.575 --rc genhtml_legend=1 00:11:33.575 --rc geninfo_all_blocks=1 00:11:33.575 --rc geninfo_unexecuted_blocks=1 00:11:33.575 00:11:33.575 ' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.575 --rc genhtml_branch_coverage=1 00:11:33.575 --rc genhtml_function_coverage=1 00:11:33.575 --rc genhtml_legend=1 00:11:33.575 --rc geninfo_all_blocks=1 00:11:33.575 --rc geninfo_unexecuted_blocks=1 00:11:33.575 00:11:33.575 ' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.575 --rc genhtml_branch_coverage=1 00:11:33.575 --rc genhtml_function_coverage=1 00:11:33.575 --rc genhtml_legend=1 00:11:33.575 --rc geninfo_all_blocks=1 00:11:33.575 --rc geninfo_unexecuted_blocks=1 00:11:33.575 00:11:33.575 ' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.575 --rc genhtml_branch_coverage=1 00:11:33.575 --rc genhtml_function_coverage=1 00:11:33.575 --rc genhtml_legend=1 00:11:33.575 --rc geninfo_all_blocks=1 00:11:33.575 --rc geninfo_unexecuted_blocks=1 00:11:33.575 00:11:33.575 ' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.575 13:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.575 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.836 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.836 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.836 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.836 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.979 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.979 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:41.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:41.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.979 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:41.979 Found net devices under 0000:31:00.0: cvl_0_0 00:11:41.979 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.980 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:41.980 Found net devices under 0000:31:00.1: cvl_0_1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:11:41.980 00:11:41.980 --- 10.0.0.2 ping statistics --- 00:11:41.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.980 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:11:41.980 00:11:41.980 --- 10.0.0.1 ping statistics --- 00:11:41.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.980 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2300632 00:11:41.980 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2300632 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2300632 ']' 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.980 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.980 [2024-11-06 13:53:27.586952] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:11:41.980 [2024-11-06 13:53:27.587019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.980 [2024-11-06 13:53:27.686728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.980 [2024-11-06 13:53:27.739965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:41.980 [2024-11-06 13:53:27.740020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.980 [2024-11-06 13:53:27.740029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.980 [2024-11-06 13:53:27.740037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.980 [2024-11-06 13:53:27.740043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.980 [2024-11-06 13:53:27.742119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.980 [2024-11-06 13:53:27.742277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.980 [2024-11-06 13:53:27.742407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.980 [2024-11-06 13:53:27.742408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.242 [2024-11-06 13:53:28.464764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.242 Null1 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.242 
13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.242 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 [2024-11-06 13:53:28.535035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 Null2 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 
13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 Null3 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 Null4 00:11:42.504 
13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.505 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:11:42.767 00:11:42.767 Discovery Log Number of Records 6, Generation counter 6 00:11:42.767 =====Discovery Log Entry 0====== 00:11:42.767 trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: current discovery subsystem 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4420 00:11:42.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: explicit discovery connections, duplicate discovery information 00:11:42.767 sectype: none 00:11:42.767 =====Discovery Log Entry 1====== 00:11:42.767 trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: nvme subsystem 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4420 00:11:42.767 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: none 00:11:42.767 sectype: none 00:11:42.767 =====Discovery Log Entry 2====== 00:11:42.767 
trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: nvme subsystem 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4420 00:11:42.767 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: none 00:11:42.767 sectype: none 00:11:42.767 =====Discovery Log Entry 3====== 00:11:42.767 trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: nvme subsystem 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4420 00:11:42.767 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: none 00:11:42.767 sectype: none 00:11:42.767 =====Discovery Log Entry 4====== 00:11:42.767 trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: nvme subsystem 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4420 00:11:42.767 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: none 00:11:42.767 sectype: none 00:11:42.767 =====Discovery Log Entry 5====== 00:11:42.767 trtype: tcp 00:11:42.767 adrfam: ipv4 00:11:42.767 subtype: discovery subsystem referral 00:11:42.767 treq: not required 00:11:42.767 portid: 0 00:11:42.767 trsvcid: 4430 00:11:42.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.767 traddr: 10.0.0.2 00:11:42.767 eflags: none 00:11:42.767 sectype: none 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:42.767 Perform nvmf subsystem discovery via RPC 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 [ 00:11:42.767 { 00:11:42.767 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:42.767 "subtype": "Discovery", 00:11:42.767 "listen_addresses": [ 00:11:42.767 { 00:11:42.767 "trtype": "TCP", 00:11:42.767 "adrfam": "IPv4", 00:11:42.767 "traddr": "10.0.0.2", 00:11:42.767 "trsvcid": "4420" 00:11:42.767 } 00:11:42.767 ], 00:11:42.767 "allow_any_host": true, 00:11:42.767 "hosts": [] 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.767 "subtype": "NVMe", 00:11:42.767 "listen_addresses": [ 00:11:42.767 { 00:11:42.767 "trtype": "TCP", 00:11:42.767 "adrfam": "IPv4", 00:11:42.767 "traddr": "10.0.0.2", 00:11:42.767 "trsvcid": "4420" 00:11:42.767 } 00:11:42.767 ], 00:11:42.767 "allow_any_host": true, 00:11:42.767 "hosts": [], 00:11:42.767 "serial_number": "SPDK00000000000001", 00:11:42.767 "model_number": "SPDK bdev Controller", 00:11:42.767 "max_namespaces": 32, 00:11:42.767 "min_cntlid": 1, 00:11:42.767 "max_cntlid": 65519, 00:11:42.767 "namespaces": [ 00:11:42.767 { 00:11:42.767 "nsid": 1, 00:11:42.767 "bdev_name": "Null1", 00:11:42.767 "name": "Null1", 00:11:42.767 "nguid": "2E2FB67B93624367AA225E9B3C95C42A", 00:11:42.767 "uuid": "2e2fb67b-9362-4367-aa22-5e9b3c95c42a" 00:11:42.767 } 00:11:42.767 ] 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.767 "subtype": "NVMe", 00:11:42.767 "listen_addresses": [ 00:11:42.767 { 00:11:42.767 "trtype": "TCP", 00:11:42.767 "adrfam": "IPv4", 00:11:42.767 "traddr": "10.0.0.2", 00:11:42.767 "trsvcid": "4420" 00:11:42.767 } 00:11:42.767 ], 00:11:42.767 "allow_any_host": true, 00:11:42.767 "hosts": [], 00:11:42.767 "serial_number": "SPDK00000000000002", 00:11:42.767 "model_number": "SPDK bdev Controller", 00:11:42.767 "max_namespaces": 32, 00:11:42.767 "min_cntlid": 1, 00:11:42.767 "max_cntlid": 65519, 00:11:42.767 "namespaces": [ 00:11:42.767 { 00:11:42.767 "nsid": 1, 00:11:42.767 "bdev_name": "Null2", 00:11:42.767 "name": "Null2", 00:11:42.767 "nguid": "3351327704634476A9243EDA8AA3B5CC", 
00:11:42.767 "uuid": "33513277-0463-4476-a924-3eda8aa3b5cc" 00:11:42.767 } 00:11:42.767 ] 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:42.767 "subtype": "NVMe", 00:11:42.767 "listen_addresses": [ 00:11:42.767 { 00:11:42.767 "trtype": "TCP", 00:11:42.767 "adrfam": "IPv4", 00:11:42.767 "traddr": "10.0.0.2", 00:11:42.767 "trsvcid": "4420" 00:11:42.767 } 00:11:42.767 ], 00:11:42.767 "allow_any_host": true, 00:11:42.767 "hosts": [], 00:11:42.767 "serial_number": "SPDK00000000000003", 00:11:42.767 "model_number": "SPDK bdev Controller", 00:11:42.767 "max_namespaces": 32, 00:11:42.767 "min_cntlid": 1, 00:11:42.767 "max_cntlid": 65519, 00:11:42.767 "namespaces": [ 00:11:42.767 { 00:11:42.767 "nsid": 1, 00:11:42.767 "bdev_name": "Null3", 00:11:42.767 "name": "Null3", 00:11:42.767 "nguid": "7864B7DE951743E88F0AB9BE27BB8DDF", 00:11:42.767 "uuid": "7864b7de-9517-43e8-8f0a-b9be27bb8ddf" 00:11:42.767 } 00:11:42.767 ] 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:42.767 "subtype": "NVMe", 00:11:42.767 "listen_addresses": [ 00:11:42.767 { 00:11:42.767 "trtype": "TCP", 00:11:42.767 "adrfam": "IPv4", 00:11:42.767 "traddr": "10.0.0.2", 00:11:42.767 "trsvcid": "4420" 00:11:42.767 } 00:11:42.767 ], 00:11:42.767 "allow_any_host": true, 00:11:42.767 "hosts": [], 00:11:42.767 "serial_number": "SPDK00000000000004", 00:11:42.767 "model_number": "SPDK bdev Controller", 00:11:42.767 "max_namespaces": 32, 00:11:42.767 "min_cntlid": 1, 00:11:42.767 "max_cntlid": 65519, 00:11:42.767 "namespaces": [ 00:11:42.767 { 00:11:42.767 "nsid": 1, 00:11:42.767 "bdev_name": "Null4", 00:11:42.767 "name": "Null4", 00:11:42.767 "nguid": "A9EE2B45BB414D99AB77B8617073327C", 00:11:42.767 "uuid": "a9ee2b45-bb41-4d99-ab77-b8617073327c" 00:11:42.767 } 00:11:42.767 ] 00:11:42.767 } 00:11:42.767 ] 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.767 
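The xtrace above shows discovery.sh driving a fixed per-cnode sequence: for each of four targets it creates a null bdev, a subsystem, a namespace, and a TCP listener, then adds a discovery listener and a referral on port 4430 before verifying with `nvme discover` and `nvmf_get_subsystems`. A minimal sketch of that RPC sequence is below. This is an illustration, not the test script itself: it assumes a running SPDK target with `rpc.py` on PATH (the test uses its own `rpc_cmd` wrapper), so to stay self-contained it only builds and prints the invocations rather than executing them.

```shell
#!/usr/bin/env bash
# Sketch of the setup sequence visible in the log (discovery.sh lines 26-30
# plus the discovery listener and referral). The IP/port match the traddr and
# trsvcid seen in the discovery log entries above.
NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=10.0.0.2

cmds=()
for i in $(seq 1 4); do
  # One 100 MiB null bdev with 512-byte blocks per cnode, as in the log.
  cmds+=("rpc.py bdev_null_create Null$i 102400 512")
  cmds+=("rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i")
  cmds+=("rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i")
  cmds+=("rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT")
done
# Discovery listener plus a referral on 4430 -- together with the four
# subsystems these account for the 6 discovery log records reported above.
cmds+=("rpc.py nvmf_subsystem_add_listener discovery -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT")
cmds+=("rpc.py nvmf_discovery_add_referral -t tcp -a $NVMF_FIRST_TARGET_IP -s 4430")

printf '%s\n' "${cmds[@]}"
```

Against a live target the same commands would be executed directly, and `nvme discover -t tcp -a 10.0.0.2 -s 4420` should then report six records: the current discovery subsystem, four NVMe subsystems, and the referral.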
13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.767 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:42.768 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.029 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.029 rmmod nvme_tcp 00:11:43.029 rmmod nvme_fabrics 00:11:43.029 rmmod nvme_keyring 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2300632 ']' 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2300632 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2300632 ']' 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2300632 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 
00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2300632 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2300632' 00:11:43.030 killing process with pid 2300632 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2300632 00:11:43.030 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2300632 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.290 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.291 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.291 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.291 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.206 00:11:45.206 real 0m11.823s 00:11:45.206 user 0m8.764s 00:11:45.206 sys 0m6.253s 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.206 ************************************ 00:11:45.206 END TEST nvmf_target_discovery 00:11:45.206 ************************************ 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.206 13:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.467 ************************************ 00:11:45.467 START TEST nvmf_referrals 00:11:45.467 ************************************ 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.467 * Looking for test storage... 
00:11:45.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:45.467 13:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.467 
--rc genhtml_branch_coverage=1 00:11:45.467 --rc genhtml_function_coverage=1 00:11:45.467 --rc genhtml_legend=1 00:11:45.467 --rc geninfo_all_blocks=1 00:11:45.467 --rc geninfo_unexecuted_blocks=1 00:11:45.467 00:11:45.467 ' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.467 --rc genhtml_branch_coverage=1 00:11:45.467 --rc genhtml_function_coverage=1 00:11:45.467 --rc genhtml_legend=1 00:11:45.467 --rc geninfo_all_blocks=1 00:11:45.467 --rc geninfo_unexecuted_blocks=1 00:11:45.467 00:11:45.467 ' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.467 --rc genhtml_branch_coverage=1 00:11:45.467 --rc genhtml_function_coverage=1 00:11:45.467 --rc genhtml_legend=1 00:11:45.467 --rc geninfo_all_blocks=1 00:11:45.467 --rc geninfo_unexecuted_blocks=1 00:11:45.467 00:11:45.467 ' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.467 --rc genhtml_branch_coverage=1 00:11:45.467 --rc genhtml_function_coverage=1 00:11:45.467 --rc genhtml_legend=1 00:11:45.467 --rc geninfo_all_blocks=1 00:11:45.467 --rc geninfo_unexecuted_blocks=1 00:11:45.467 00:11:45.467 ' 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.467 
13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:45.467 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.468 13:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.468 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.729 13:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.729 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.966 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:53.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:53.967 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:53.967 Found net devices under 0000:31:00.0: cvl_0_0 00:11:53.967 13:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:53.967 Found net devices under 0000:31:00.1: cvl_0_1 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.967 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:11:53.967 00:11:53.967 --- 10.0.0.2 ping statistics --- 00:11:53.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.967 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:11:53.967 00:11:53.967 --- 10.0.0.1 ping statistics --- 00:11:53.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.967 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2305085 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2305085 00:11:53.967 
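The `nvmf_tcp_init` sequence traced above splits the two ports of the NIC between the host and a dedicated network namespace so that target and initiator can talk over real hardware on one machine. A dry-run sketch of that wiring (commands are echoed rather than executed, since `ip`/`iptables` need root; namespace, interface, and address names are taken from this log):

```shell
# Dry-run sketch of the nvmf_tcp_init wiring seen in the trace above.
# run() echoes instead of executing, because these commands require root.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "+ $*"; }

run ip netns add "$NS"                        # target side gets its own netns
run ip link set "$TGT_IF" netns "$NS"         # move the target port into it
run ip addr add "$INI_IP/24" dev "$INI_IF"    # initiator stays in the host netns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

A one-packet `ping` in each direction, as in the log, then proves the link before `nvmf_tgt` is launched inside the namespace with `ip netns exec`.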
13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2305085 ']' 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.967 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.967 [2024-11-06 13:53:39.412431] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:11:53.967 [2024-11-06 13:53:39.412501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.967 [2024-11-06 13:53:39.515106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.967 [2024-11-06 13:53:39.568511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.967 [2024-11-06 13:53:39.568567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:53.967 [2024-11-06 13:53:39.568575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.967 [2024-11-06 13:53:39.568583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.967 [2024-11-06 13:53:39.568589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.967 [2024-11-06 13:53:39.570729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.968 [2024-11-06 13:53:39.570892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.968 [2024-11-06 13:53:39.571225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.968 [2024-11-06 13:53:39.571228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.968 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.271 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 [2024-11-06 13:53:40.289173] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 [2024-11-06 13:53:40.319995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.272 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.272 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.549 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.552 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.552 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.813 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.813 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.814 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.814 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.075 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.337 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.598 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.599 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.859 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:55.859 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.859 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:55.859 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.859 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.859 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:56.119 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:56.120 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.120 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.120 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.120 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.120 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.380 rmmod nvme_tcp 00:11:56.380 rmmod nvme_fabrics 00:11:56.380 rmmod nvme_keyring 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2305085 ']' 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2305085 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2305085 ']' 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2305085 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2305085 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2305085' 00:11:56.380 killing process with pid 2305085 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@971 -- # kill 2305085 00:11:56.380 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2305085 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.640 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.554 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.554 00:11:58.554 real 0m13.309s 00:11:58.554 user 0m15.777s 00:11:58.554 sys 0m6.593s 00:11:58.554 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:58.554 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.554 
************************************ 00:11:58.554 END TEST nvmf_referrals 00:11:58.554 ************************************ 00:11:58.815 13:53:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:58.815 13:53:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:58.815 13:53:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:58.815 13:53:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.815 ************************************ 00:11:58.815 START TEST nvmf_connect_disconnect 00:11:58.815 ************************************ 00:11:58.815 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:58.815 * Looking for test storage... 
00:11:58.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.815 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:59.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.076 --rc genhtml_branch_coverage=1 00:11:59.076 --rc genhtml_function_coverage=1 00:11:59.076 --rc genhtml_legend=1 00:11:59.076 --rc geninfo_all_blocks=1 00:11:59.076 --rc geninfo_unexecuted_blocks=1 00:11:59.076 00:11:59.076 ' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:59.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.076 --rc genhtml_branch_coverage=1 00:11:59.076 --rc genhtml_function_coverage=1 00:11:59.076 --rc genhtml_legend=1 00:11:59.076 --rc geninfo_all_blocks=1 00:11:59.076 --rc geninfo_unexecuted_blocks=1 00:11:59.076 00:11:59.076 ' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:59.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.076 --rc genhtml_branch_coverage=1 00:11:59.076 --rc genhtml_function_coverage=1 00:11:59.076 --rc genhtml_legend=1 00:11:59.076 --rc geninfo_all_blocks=1 00:11:59.076 --rc geninfo_unexecuted_blocks=1 00:11:59.076 00:11:59.076 ' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:59.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.076 --rc genhtml_branch_coverage=1 00:11:59.076 --rc genhtml_function_coverage=1 00:11:59.076 --rc genhtml_legend=1 00:11:59.076 --rc geninfo_all_blocks=1 00:11:59.076 --rc geninfo_unexecuted_blocks=1 00:11:59.076 00:11:59.076 ' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.076 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.077 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.213 13:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.213 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.214 13:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:07.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:07.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.214 13:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:07.214 Found net devices under 0000:31:00.0: cvl_0_0 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.214 13:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:07.214 Found net devices under 0000:31:00.1: cvl_0_1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.214 13:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:12:07.214 00:12:07.214 --- 10.0.0.2 ping statistics --- 00:12:07.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.214 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:12:07.214 00:12:07.214 --- 10.0.0.1 ping statistics --- 00:12:07.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.214 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2310198 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2310198 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2310198 ']' 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.214 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.215 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 [2024-11-06 13:53:52.819282] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:12:07.215 [2024-11-06 13:53:52.819352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.215 [2024-11-06 13:53:52.918793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.215 [2024-11-06 13:53:52.971590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:07.215 [2024-11-06 13:53:52.971641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.215 [2024-11-06 13:53:52.971649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.215 [2024-11-06 13:53:52.971656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.215 [2024-11-06 13:53:52.971663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.215 [2024-11-06 13:53:52.974097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.215 [2024-11-06 13:53:52.974256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.215 [2024-11-06 13:53:52.974414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.215 [2024-11-06 13:53:52.974415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:07.475 13:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.475 [2024-11-06 13:53:53.700279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.475 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.736 13:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.736 [2024-11-06 13:53:53.778220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:07.736 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:11.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.039 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.039 rmmod nvme_tcp 00:12:26.039 rmmod nvme_fabrics 00:12:26.039 rmmod nvme_keyring 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2310198 ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2310198 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2310198 ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2310198 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2310198 
00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2310198' 00:12:26.039 killing process with pid 2310198 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2310198 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2310198 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.039 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.039 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.581 00:12:28.581 real 0m29.462s 00:12:28.581 user 1m19.110s 00:12:28.581 sys 0m7.165s 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.581 ************************************ 00:12:28.581 END TEST nvmf_connect_disconnect 00:12:28.581 ************************************ 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.581 ************************************ 00:12:28.581 START TEST nvmf_multitarget 00:12:28.581 ************************************ 00:12:28.581 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.581 * Looking for test storage... 
00:12:28.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:28.582 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.582 --rc genhtml_branch_coverage=1 00:12:28.582 --rc genhtml_function_coverage=1 00:12:28.582 --rc genhtml_legend=1 00:12:28.582 --rc geninfo_all_blocks=1 00:12:28.582 --rc geninfo_unexecuted_blocks=1 00:12:28.582 00:12:28.582 ' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.582 --rc genhtml_branch_coverage=1 00:12:28.582 --rc genhtml_function_coverage=1 00:12:28.582 --rc genhtml_legend=1 00:12:28.582 --rc geninfo_all_blocks=1 00:12:28.582 --rc geninfo_unexecuted_blocks=1 00:12:28.582 00:12:28.582 ' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.582 --rc genhtml_branch_coverage=1 00:12:28.582 --rc genhtml_function_coverage=1 00:12:28.582 --rc genhtml_legend=1 00:12:28.582 --rc geninfo_all_blocks=1 00:12:28.582 --rc geninfo_unexecuted_blocks=1 00:12:28.582 00:12:28.582 ' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.582 --rc genhtml_branch_coverage=1 00:12:28.582 --rc genhtml_function_coverage=1 00:12:28.582 --rc genhtml_legend=1 00:12:28.582 --rc geninfo_all_blocks=1 00:12:28.582 --rc geninfo_unexecuted_blocks=1 00:12:28.582 00:12:28.582 ' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.582 13:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.582 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.583 13:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.583 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:36.716 13:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.716 13:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:36.716 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:36.716 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.716 13:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:36.716 Found net devices under 0000:31:00.0: cvl_0_0 00:12:36.716 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.717 
13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:36.717 Found net devices under 0000:31:00.1: cvl_0_1 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.717 13:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.717 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:12:36.717 00:12:36.717 --- 10.0.0.2 ping statistics --- 00:12:36.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.717 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:12:36.717 00:12:36.717 --- 10.0.0.1 ping statistics --- 00:12:36.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.717 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2318632 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2318632 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2318632 ']' 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.717 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.717 [2024-11-06 13:54:22.305209] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:12:36.717 [2024-11-06 13:54:22.305277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.717 [2024-11-06 13:54:22.406850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.717 [2024-11-06 13:54:22.460329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.717 [2024-11-06 13:54:22.460379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:36.717 [2024-11-06 13:54:22.460388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.717 [2024-11-06 13:54:22.460395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.717 [2024-11-06 13:54:22.460402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.717 [2024-11-06 13:54:22.462821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.717 [2024-11-06 13:54:22.463167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.717 [2024-11-06 13:54:22.463299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.717 [2024-11-06 13:54:22.463302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:36.977 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.977 13:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:37.238 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:37.238 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:37.238 "nvmf_tgt_1" 00:12:37.238 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:37.238 "nvmf_tgt_2" 00:12:37.498 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.498 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:37.498 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:37.498 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:37.498 true 00:12:37.498 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:37.759 true 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.759 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.759 rmmod nvme_tcp 00:12:37.759 rmmod nvme_fabrics 00:12:38.020 rmmod nvme_keyring 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2318632 ']' 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2318632 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2318632 ']' 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2318632 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2318632 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2318632' 00:12:38.020 killing process with pid 2318632 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2318632 00:12:38.020 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2318632 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.280 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.194 00:12:40.194 real 0m11.947s 00:12:40.194 user 0m10.401s 00:12:40.194 sys 0m6.197s 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:40.194 ************************************ 00:12:40.194 END TEST nvmf_multitarget 00:12:40.194 ************************************ 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.194 13:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.457 ************************************ 00:12:40.457 START TEST nvmf_rpc 00:12:40.457 ************************************ 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.457 * Looking for test storage... 
00:12:40.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.457 13:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.457 --rc genhtml_branch_coverage=1 00:12:40.457 --rc genhtml_function_coverage=1 00:12:40.457 --rc genhtml_legend=1 00:12:40.457 --rc geninfo_all_blocks=1 00:12:40.457 --rc geninfo_unexecuted_blocks=1 
00:12:40.457 00:12:40.457 ' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.457 --rc genhtml_branch_coverage=1 00:12:40.457 --rc genhtml_function_coverage=1 00:12:40.457 --rc genhtml_legend=1 00:12:40.457 --rc geninfo_all_blocks=1 00:12:40.457 --rc geninfo_unexecuted_blocks=1 00:12:40.457 00:12:40.457 ' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.457 --rc genhtml_branch_coverage=1 00:12:40.457 --rc genhtml_function_coverage=1 00:12:40.457 --rc genhtml_legend=1 00:12:40.457 --rc geninfo_all_blocks=1 00:12:40.457 --rc geninfo_unexecuted_blocks=1 00:12:40.457 00:12:40.457 ' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.457 --rc genhtml_branch_coverage=1 00:12:40.457 --rc genhtml_function_coverage=1 00:12:40.457 --rc genhtml_legend=1 00:12:40.457 --rc geninfo_all_blocks=1 00:12:40.457 --rc geninfo_unexecuted_blocks=1 00:12:40.457 00:12:40.457 ' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.457 13:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.457 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.458 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.458 13:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.654 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.654 
13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.654 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:12:48.655 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:48.655 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:48.655 Found net devices under 0000:31:00.0: cvl_0_0 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:48.655 Found net devices under 0000:31:00.1: cvl_0_1 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.655 13:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.655 
13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:12:48.655 00:12:48.655 --- 10.0.0.2 ping statistics --- 00:12:48.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.655 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:12:48.655 00:12:48.655 --- 10.0.0.1 ping statistics --- 00:12:48.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.655 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2323332 00:12:48.655 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2323332 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2323332 ']' 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.656 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.656 [2024-11-06 13:54:34.440640] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:12:48.656 [2024-11-06 13:54:34.440711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.656 [2024-11-06 13:54:34.542207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.656 [2024-11-06 13:54:34.594086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.656 [2024-11-06 13:54:34.594134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:48.656 [2024-11-06 13:54:34.594142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.656 [2024-11-06 13:54:34.594149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.656 [2024-11-06 13:54:34.594156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.656 [2024-11-06 13:54:34.596542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.656 [2024-11-06 13:54:34.596738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.656 [2024-11-06 13:54:34.596875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.656 [2024-11-06 13:54:34.596875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.229 13:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:49.229 "tick_rate": 2400000000, 00:12:49.229 "poll_groups": [ 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_000", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_001", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_002", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_003", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [] 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 }' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.229 13:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 [2024-11-06 13:54:35.431220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:49.229 "tick_rate": 2400000000, 00:12:49.229 "poll_groups": [ 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_000", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [ 00:12:49.229 { 00:12:49.229 "trtype": "TCP" 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_001", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 
"completed_nvme_io": 0, 00:12:49.229 "transports": [ 00:12:49.229 { 00:12:49.229 "trtype": "TCP" 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_002", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [ 00:12:49.229 { 00:12:49.229 "trtype": "TCP" 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 }, 00:12:49.229 { 00:12:49.229 "name": "nvmf_tgt_poll_group_003", 00:12:49.229 "admin_qpairs": 0, 00:12:49.229 "io_qpairs": 0, 00:12:49.229 "current_admin_qpairs": 0, 00:12:49.229 "current_io_qpairs": 0, 00:12:49.229 "pending_bdev_io": 0, 00:12:49.229 "completed_nvme_io": 0, 00:12:49.229 "transports": [ 00:12:49.229 { 00:12:49.229 "trtype": "TCP" 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 } 00:12:49.229 ] 00:12:49.229 }' 00:12:49.229 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.230 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.230 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.230 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.492 
13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 Malloc1 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.492 13:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 [2024-11-06 13:54:35.638638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:49.492 [2024-11-06 13:54:35.675676] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:49.492 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.492 could not add new controller: failed to write to nvme-fabrics device 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.406 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.406 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:51.406 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.406 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:51.406 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 
00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:53.318 13:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:53.318 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.319 [2024-11-06 13:54:39.403791] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:53.319 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.319 could not add new controller: failed to write to nvme-fabrics device 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:53.319 
13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.319 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.231 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.231 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.231 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.231 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:55.231 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:57.144 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.144 [2024-11-06 13:54:43.157216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.144 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.526 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.526 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:58.526 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.526 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:58.526 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:00.440 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.701 
13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 [2024-11-06 13:54:46.869650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.701 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.617 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.617 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:02.617 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.617 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:02.617 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 13:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 [2024-11-06 13:54:50.615853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.549 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.935 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.935 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:05.935 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.935 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:05.935 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 [2024-11-06 13:54:54.333723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.858 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.858 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:09.858 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:09.858 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:09.858 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:11.769 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.769 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 [2024-11-06 13:54:58.085928] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.030 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.412 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.412 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:13.412 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.412 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:13.412 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 [2024-11-06 13:55:01.928981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:15.957 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 [2024-11-06 13:55:01.997155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 
13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:15.958 [2024-11-06 13:55:02.065332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 [2024-11-06 13:55:02.137548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 [2024-11-06 13:55:02.205776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.958 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.219 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:16.219 "tick_rate": 2400000000, 00:13:16.219 "poll_groups": [ 00:13:16.219 { 00:13:16.219 "name": "nvmf_tgt_poll_group_000", 00:13:16.219 "admin_qpairs": 0, 00:13:16.219 "io_qpairs": 224, 00:13:16.219 "current_admin_qpairs": 0, 00:13:16.219 "current_io_qpairs": 0, 00:13:16.219 "pending_bdev_io": 0, 00:13:16.219 "completed_nvme_io": 384, 00:13:16.219 "transports": [ 00:13:16.219 { 00:13:16.219 "trtype": "TCP" 00:13:16.219 } 00:13:16.219 ] 00:13:16.219 }, 00:13:16.219 { 00:13:16.219 "name": "nvmf_tgt_poll_group_001", 00:13:16.219 "admin_qpairs": 1, 00:13:16.219 "io_qpairs": 223, 00:13:16.219 "current_admin_qpairs": 0, 00:13:16.219 "current_io_qpairs": 0, 00:13:16.219 "pending_bdev_io": 0, 00:13:16.219 "completed_nvme_io": 411, 00:13:16.219 "transports": [ 00:13:16.219 { 00:13:16.219 "trtype": "TCP" 00:13:16.219 } 00:13:16.219 ] 00:13:16.219 }, 00:13:16.219 { 00:13:16.219 "name": "nvmf_tgt_poll_group_002", 00:13:16.219 "admin_qpairs": 6, 00:13:16.219 "io_qpairs": 218, 00:13:16.219 "current_admin_qpairs": 0, 00:13:16.219 "current_io_qpairs": 0, 00:13:16.219 "pending_bdev_io": 0, 
00:13:16.219 "completed_nvme_io": 218, 00:13:16.219 "transports": [ 00:13:16.219 { 00:13:16.219 "trtype": "TCP" 00:13:16.219 } 00:13:16.219 ] 00:13:16.219 }, 00:13:16.219 { 00:13:16.220 "name": "nvmf_tgt_poll_group_003", 00:13:16.220 "admin_qpairs": 0, 00:13:16.220 "io_qpairs": 224, 00:13:16.220 "current_admin_qpairs": 0, 00:13:16.220 "current_io_qpairs": 0, 00:13:16.220 "pending_bdev_io": 0, 00:13:16.220 "completed_nvme_io": 226, 00:13:16.220 "transports": [ 00:13:16.220 { 00:13:16.220 "trtype": "TCP" 00:13:16.220 } 00:13:16.220 ] 00:13:16.220 } 00:13:16.220 ] 00:13:16.220 }' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.220 rmmod nvme_tcp 00:13:16.220 rmmod nvme_fabrics 00:13:16.220 rmmod nvme_keyring 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2323332 ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2323332 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2323332 ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2323332 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2323332 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2323332' 00:13:16.220 killing process with pid 2323332 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2323332 00:13:16.220 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2323332 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.481 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.482 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.482 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.482 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.482 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.030 00:13:19.030 real 0m38.214s 00:13:19.030 user 1m53.937s 00:13:19.030 sys 0m8.069s 00:13:19.030 13:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.030 ************************************ 00:13:19.030 END TEST nvmf_rpc 00:13:19.030 ************************************ 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.030 13:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.031 ************************************ 00:13:19.031 START TEST nvmf_invalid 00:13:19.031 ************************************ 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:19.031 * Looking for test storage... 
00:13:19.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.031 --rc genhtml_branch_coverage=1 00:13:19.031 --rc 
genhtml_function_coverage=1 00:13:19.031 --rc genhtml_legend=1 00:13:19.031 --rc geninfo_all_blocks=1 00:13:19.031 --rc geninfo_unexecuted_blocks=1 00:13:19.031 00:13:19.031 ' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.031 --rc genhtml_branch_coverage=1 00:13:19.031 --rc genhtml_function_coverage=1 00:13:19.031 --rc genhtml_legend=1 00:13:19.031 --rc geninfo_all_blocks=1 00:13:19.031 --rc geninfo_unexecuted_blocks=1 00:13:19.031 00:13:19.031 ' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.031 --rc genhtml_branch_coverage=1 00:13:19.031 --rc genhtml_function_coverage=1 00:13:19.031 --rc genhtml_legend=1 00:13:19.031 --rc geninfo_all_blocks=1 00:13:19.031 --rc geninfo_unexecuted_blocks=1 00:13:19.031 00:13:19.031 ' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.031 --rc genhtml_branch_coverage=1 00:13:19.031 --rc genhtml_function_coverage=1 00:13:19.031 --rc genhtml_legend=1 00:13:19.031 --rc geninfo_all_blocks=1 00:13:19.031 --rc geninfo_unexecuted_blocks=1 00:13:19.031 00:13:19.031 ' 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.031 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.031 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.031 13:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.032 13:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.032 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.178 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.178 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:27.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:27.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:27.178 Found net devices under 0000:31:00.0: cvl_0_0 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:27.178 Found net devices under 0000:31:00.1: cvl_0_1 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.178 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.179 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.179 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:27.179 00:13:27.179 --- 10.0.0.2 ping statistics --- 00:13:27.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.179 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:13:27.179 00:13:27.179 --- 10.0.0.1 ping statistics --- 00:13:27.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.179 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.179 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2333234 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2333234 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2333234 ']' 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
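The `nvmf_tcp_init` steps traced above (`nvmf/common.sh@250`–`291`) move the target NIC into a private network namespace, address both ends, open TCP port 4420, and ping in both directions. The commands below are copied from the log; the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and 10.0.0.0/24 addressing all come from this run. The `DRY_RUN` guard is my addition so the sketch can be printed without root; set `DRY_RUN=` (empty) as root on a machine with those interfaces to actually execute it.

```shell
# Sketch of the nvmf_tcp_init plumbing recorded in the log above.
# Requires root and real NICs to execute; defaults to printing commands.
DRY_RUN=${DRY_RUN:-echo}      # set DRY_RUN= (empty) to really run as root
TGT_IF=cvl_0_0                # target-side interface (moved into the netns)
INI_IF=cvl_0_1                # initiator-side interface (stays in the host)
NS=cvl_0_0_ns_spdk

$DRY_RUN ip -4 addr flush "$TGT_IF"
$DRY_RUN ip -4 addr flush "$INI_IF"
$DRY_RUN ip netns add "$NS"
$DRY_RUN ip link set "$TGT_IF" netns "$NS"
$DRY_RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$DRY_RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$DRY_RUN ip link set "$INI_IF" up
$DRY_RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$DRY_RUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface.
$DRY_RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as the log does.
$DRY_RUN ping -c 1 10.0.0.2
$DRY_RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace split is what lets the same host act as both NVMe-oF target (inside `cvl_0_0_ns_spdk`, hence the later `ip netns exec ... nvmf_tgt`) and initiator (in the root namespace) over a real link.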
00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.179 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.179 [2024-11-06 13:55:12.711255] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:13:27.179 [2024-11-06 13:55:12.711321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.179 [2024-11-06 13:55:12.813061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.179 [2024-11-06 13:55:12.866530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.179 [2024-11-06 13:55:12.866586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.179 [2024-11-06 13:55:12.866595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.179 [2024-11-06 13:55:12.866602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.179 [2024-11-06 13:55:12.866608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.179 [2024-11-06 13:55:12.869052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.179 [2024-11-06 13:55:12.869212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.179 [2024-11-06 13:55:12.869370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.179 [2024-11-06 13:55:12.869371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:27.440 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31453 00:13:27.701 [2024-11-06 13:55:13.752104] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:27.701 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:27.701 { 00:13:27.701 "nqn": "nqn.2016-06.io.spdk:cnode31453", 00:13:27.701 "tgt_name": "foobar", 00:13:27.701 "method": "nvmf_create_subsystem", 00:13:27.701 "req_id": 1 00:13:27.701 } 00:13:27.701 Got JSON-RPC error 
response 00:13:27.701 response: 00:13:27.701 { 00:13:27.701 "code": -32603, 00:13:27.701 "message": "Unable to find target foobar" 00:13:27.701 }' 00:13:27.701 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:27.701 { 00:13:27.701 "nqn": "nqn.2016-06.io.spdk:cnode31453", 00:13:27.701 "tgt_name": "foobar", 00:13:27.701 "method": "nvmf_create_subsystem", 00:13:27.701 "req_id": 1 00:13:27.701 } 00:13:27.701 Got JSON-RPC error response 00:13:27.701 response: 00:13:27.701 { 00:13:27.701 "code": -32603, 00:13:27.701 "message": "Unable to find target foobar" 00:13:27.701 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:27.701 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:27.701 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9777 00:13:27.701 [2024-11-06 13:55:13.960966] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9777: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:27.962 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:27.962 { 00:13:27.962 "nqn": "nqn.2016-06.io.spdk:cnode9777", 00:13:27.962 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.962 "method": "nvmf_create_subsystem", 00:13:27.962 "req_id": 1 00:13:27.962 } 00:13:27.962 Got JSON-RPC error response 00:13:27.962 response: 00:13:27.963 { 00:13:27.963 "code": -32602, 00:13:27.963 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.963 }' 00:13:27.963 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:27.963 { 00:13:27.963 "nqn": "nqn.2016-06.io.spdk:cnode9777", 00:13:27.963 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.963 "method": "nvmf_create_subsystem", 00:13:27.963 
"req_id": 1 00:13:27.963 } 00:13:27.963 Got JSON-RPC error response 00:13:27.963 response: 00:13:27.963 { 00:13:27.963 "code": -32602, 00:13:27.963 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.963 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25574 00:13:27.963 [2024-11-06 13:55:14.169720] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25574: invalid model number 'SPDK_Controller' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:27.963 { 00:13:27.963 "nqn": "nqn.2016-06.io.spdk:cnode25574", 00:13:27.963 "model_number": "SPDK_Controller\u001f", 00:13:27.963 "method": "nvmf_create_subsystem", 00:13:27.963 "req_id": 1 00:13:27.963 } 00:13:27.963 Got JSON-RPC error response 00:13:27.963 response: 00:13:27.963 { 00:13:27.963 "code": -32602, 00:13:27.963 "message": "Invalid MN SPDK_Controller\u001f" 00:13:27.963 }' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:27.963 { 00:13:27.963 "nqn": "nqn.2016-06.io.spdk:cnode25574", 00:13:27.963 "model_number": "SPDK_Controller\u001f", 00:13:27.963 "method": "nvmf_create_subsystem", 00:13:27.963 "req_id": 1 00:13:27.963 } 00:13:27.963 Got JSON-RPC error response 00:13:27.963 response: 00:13:27.963 { 00:13:27.963 "code": -32602, 00:13:27.963 "message": "Invalid MN SPDK_Controller\u001f" 00:13:27.963 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.963 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.963 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:28.224 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:28.224 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.224 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:28.224 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.225 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'aiJ$$kXe+R95y@9^0dMYi' 00:13:28.225 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aiJ$$kXe+R95y@9^0dMYi' nqn.2016-06.io.spdk:cnode24249 00:13:28.487 [2024-11-06 13:55:14.547180] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24249: invalid serial number 'aiJ$$kXe+R95y@9^0dMYi' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:28.487 { 00:13:28.487 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:13:28.487 "serial_number": "aiJ$$kXe+R95y@9^0dMYi", 00:13:28.487 "method": "nvmf_create_subsystem", 00:13:28.487 "req_id": 1 00:13:28.487 } 00:13:28.487 Got JSON-RPC error response 00:13:28.487 response: 00:13:28.487 { 00:13:28.487 "code": -32602, 00:13:28.487 "message": "Invalid SN aiJ$$kXe+R95y@9^0dMYi" 00:13:28.487 }' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:28.487 { 00:13:28.487 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:13:28.487 "serial_number": "aiJ$$kXe+R95y@9^0dMYi", 00:13:28.487 "method": "nvmf_create_subsystem", 00:13:28.487 "req_id": 1 00:13:28.487 } 00:13:28.487 Got JSON-RPC error response 00:13:28.487 response: 00:13:28.487 { 00:13:28.487 "code": -32602, 00:13:28.487 "message": "Invalid SN aiJ$$kXe+R95y@9^0dMYi" 00:13:28.487 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:28.487 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:28.487 
13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:28.487 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:28.488 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:28.750 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:28.750 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:28.750 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:28.750 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.751 13:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'x'\''T;?du&F<13'\'' i)SrD}*i%AB")X-fZc zywK>pu)' 00:13:28.751 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'x'\''T;?du&F<13'\'' i)SrD}*i%AB")X-fZc zywK>pu)' nqn.2016-06.io.spdk:cnode23939 00:13:29.011 [2024-11-06 13:55:15.077189] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23939: invalid model number 'x'T;?du&F<13' i)SrD}*i%AB")X-fZc zywK>pu)' 00:13:29.011 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:29.011 { 00:13:29.011 "nqn": 
"nqn.2016-06.io.spdk:cnode23939", 00:13:29.011 "model_number": "x'\''T;?du&F<13'\'' i)SrD}*i%AB\")X-fZc zywK>pu)", 00:13:29.011 "method": "nvmf_create_subsystem", 00:13:29.011 "req_id": 1 00:13:29.011 } 00:13:29.011 Got JSON-RPC error response 00:13:29.011 response: 00:13:29.011 { 00:13:29.011 "code": -32602, 00:13:29.011 "message": "Invalid MN x'\''T;?du&F<13'\'' i)SrD}*i%AB\")X-fZc zywK>pu)" 00:13:29.011 }' 00:13:29.011 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:29.011 { 00:13:29.011 "nqn": "nqn.2016-06.io.spdk:cnode23939", 00:13:29.011 "model_number": "x'T;?du&F<13' i)SrD}*i%AB\")X-fZc zywK>pu)", 00:13:29.012 "method": "nvmf_create_subsystem", 00:13:29.012 "req_id": 1 00:13:29.012 } 00:13:29.012 Got JSON-RPC error response 00:13:29.012 response: 00:13:29.012 { 00:13:29.012 "code": -32602, 00:13:29.012 "message": "Invalid MN x'T;?du&F<13' i)SrD}*i%AB\")X-fZc zywK>pu)" 00:13:29.012 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.012 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:29.012 [2024-11-06 13:55:15.261873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:29.272 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:29.534 [2024-11-06 13:55:15.644522] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:29.534 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:29.534 { 00:13:29.534 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.534 "listen_address": { 00:13:29.534 "trtype": "tcp", 00:13:29.534 "traddr": "", 00:13:29.534 "trsvcid": "4421" 00:13:29.534 }, 00:13:29.534 "method": "nvmf_subsystem_remove_listener", 00:13:29.534 "req_id": 1 00:13:29.534 } 00:13:29.534 Got JSON-RPC error response 00:13:29.534 response: 00:13:29.534 { 00:13:29.534 "code": -32602, 00:13:29.534 "message": "Invalid parameters" 00:13:29.534 }' 00:13:29.534 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:29.534 { 00:13:29.534 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.534 "listen_address": { 00:13:29.534 "trtype": "tcp", 00:13:29.534 "traddr": "", 00:13:29.534 "trsvcid": "4421" 00:13:29.534 }, 00:13:29.534 "method": "nvmf_subsystem_remove_listener", 00:13:29.534 "req_id": 1 00:13:29.534 } 00:13:29.534 Got JSON-RPC error response 00:13:29.534 response: 00:13:29.534 { 00:13:29.534 "code": -32602, 00:13:29.534 "message": "Invalid parameters" 00:13:29.534 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:29.534 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13092 -i 0 00:13:29.794 [2024-11-06 13:55:15.833065] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13092: invalid cntlid range [0-65519] 00:13:29.794 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:29.794 { 00:13:29.795 "nqn": 
"nqn.2016-06.io.spdk:cnode13092", 00:13:29.795 "min_cntlid": 0, 00:13:29.795 "method": "nvmf_create_subsystem", 00:13:29.795 "req_id": 1 00:13:29.795 } 00:13:29.795 Got JSON-RPC error response 00:13:29.795 response: 00:13:29.795 { 00:13:29.795 "code": -32602, 00:13:29.795 "message": "Invalid cntlid range [0-65519]" 00:13:29.795 }' 00:13:29.795 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:29.795 { 00:13:29.795 "nqn": "nqn.2016-06.io.spdk:cnode13092", 00:13:29.795 "min_cntlid": 0, 00:13:29.795 "method": "nvmf_create_subsystem", 00:13:29.795 "req_id": 1 00:13:29.795 } 00:13:29.795 Got JSON-RPC error response 00:13:29.795 response: 00:13:29.795 { 00:13:29.795 "code": -32602, 00:13:29.795 "message": "Invalid cntlid range [0-65519]" 00:13:29.795 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.795 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6962 -i 65520 00:13:29.795 [2024-11-06 13:55:16.021640] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6962: invalid cntlid range [65520-65519] 00:13:29.795 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:29.795 { 00:13:29.795 "nqn": "nqn.2016-06.io.spdk:cnode6962", 00:13:29.795 "min_cntlid": 65520, 00:13:29.795 "method": "nvmf_create_subsystem", 00:13:29.795 "req_id": 1 00:13:29.795 } 00:13:29.795 Got JSON-RPC error response 00:13:29.795 response: 00:13:29.795 { 00:13:29.795 "code": -32602, 00:13:29.795 "message": "Invalid cntlid range [65520-65519]" 00:13:29.795 }' 00:13:29.795 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:29.795 { 00:13:29.795 "nqn": "nqn.2016-06.io.spdk:cnode6962", 00:13:29.795 "min_cntlid": 65520, 00:13:29.795 "method": "nvmf_create_subsystem", 00:13:29.795 "req_id": 1 
00:13:29.795 } 00:13:29.795 Got JSON-RPC error response 00:13:29.795 response: 00:13:29.795 { 00:13:29.795 "code": -32602, 00:13:29.795 "message": "Invalid cntlid range [65520-65519]" 00:13:29.795 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.795 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9069 -I 0 00:13:30.056 [2024-11-06 13:55:16.210253] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9069: invalid cntlid range [1-0] 00:13:30.056 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:30.056 { 00:13:30.056 "nqn": "nqn.2016-06.io.spdk:cnode9069", 00:13:30.056 "max_cntlid": 0, 00:13:30.056 "method": "nvmf_create_subsystem", 00:13:30.056 "req_id": 1 00:13:30.056 } 00:13:30.056 Got JSON-RPC error response 00:13:30.056 response: 00:13:30.056 { 00:13:30.056 "code": -32602, 00:13:30.056 "message": "Invalid cntlid range [1-0]" 00:13:30.056 }' 00:13:30.056 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:30.056 { 00:13:30.056 "nqn": "nqn.2016-06.io.spdk:cnode9069", 00:13:30.056 "max_cntlid": 0, 00:13:30.056 "method": "nvmf_create_subsystem", 00:13:30.056 "req_id": 1 00:13:30.056 } 00:13:30.056 Got JSON-RPC error response 00:13:30.056 response: 00:13:30.056 { 00:13:30.056 "code": -32602, 00:13:30.056 "message": "Invalid cntlid range [1-0]" 00:13:30.056 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.056 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2650 -I 65520 00:13:30.318 [2024-11-06 13:55:16.390826] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2650: invalid cntlid range [1-65520] 
00:13:30.318 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:30.318 { 00:13:30.318 "nqn": "nqn.2016-06.io.spdk:cnode2650", 00:13:30.318 "max_cntlid": 65520, 00:13:30.318 "method": "nvmf_create_subsystem", 00:13:30.318 "req_id": 1 00:13:30.318 } 00:13:30.318 Got JSON-RPC error response 00:13:30.318 response: 00:13:30.318 { 00:13:30.318 "code": -32602, 00:13:30.318 "message": "Invalid cntlid range [1-65520]" 00:13:30.318 }' 00:13:30.318 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:30.318 { 00:13:30.318 "nqn": "nqn.2016-06.io.spdk:cnode2650", 00:13:30.318 "max_cntlid": 65520, 00:13:30.318 "method": "nvmf_create_subsystem", 00:13:30.318 "req_id": 1 00:13:30.318 } 00:13:30.318 Got JSON-RPC error response 00:13:30.318 response: 00:13:30.318 { 00:13:30.318 "code": -32602, 00:13:30.318 "message": "Invalid cntlid range [1-65520]" 00:13:30.318 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.318 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19087 -i 6 -I 5 00:13:30.318 [2024-11-06 13:55:16.571388] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19087: invalid cntlid range [6-5] 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:30.579 { 00:13:30.579 "nqn": "nqn.2016-06.io.spdk:cnode19087", 00:13:30.579 "min_cntlid": 6, 00:13:30.579 "max_cntlid": 5, 00:13:30.579 "method": "nvmf_create_subsystem", 00:13:30.579 "req_id": 1 00:13:30.579 } 00:13:30.579 Got JSON-RPC error response 00:13:30.579 response: 00:13:30.579 { 00:13:30.579 "code": -32602, 00:13:30.579 "message": "Invalid cntlid range [6-5]" 00:13:30.579 }' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:30.579 { 
00:13:30.579 "nqn": "nqn.2016-06.io.spdk:cnode19087", 00:13:30.579 "min_cntlid": 6, 00:13:30.579 "max_cntlid": 5, 00:13:30.579 "method": "nvmf_create_subsystem", 00:13:30.579 "req_id": 1 00:13:30.579 } 00:13:30.579 Got JSON-RPC error response 00:13:30.579 response: 00:13:30.579 { 00:13:30.579 "code": -32602, 00:13:30.579 "message": "Invalid cntlid range [6-5]" 00:13:30.579 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:30.579 { 00:13:30.579 "name": "foobar", 00:13:30.579 "method": "nvmf_delete_target", 00:13:30.579 "req_id": 1 00:13:30.579 } 00:13:30.579 Got JSON-RPC error response 00:13:30.579 response: 00:13:30.579 { 00:13:30.579 "code": -32602, 00:13:30.579 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:30.579 }' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:30.579 { 00:13:30.579 "name": "foobar", 00:13:30.579 "method": "nvmf_delete_target", 00:13:30.579 "req_id": 1 00:13:30.579 } 00:13:30.579 Got JSON-RPC error response 00:13:30.579 response: 00:13:30.579 { 00:13:30.579 "code": -32602, 00:13:30.579 "message": "The specified target doesn't exist, cannot delete it." 
00:13:30.579 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.579 rmmod nvme_tcp 00:13:30.579 rmmod nvme_fabrics 00:13:30.579 rmmod nvme_keyring 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2333234 ']' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2333234 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 2333234 ']' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 2333234 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2333234 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2333234' 00:13:30.579 killing process with pid 2333234 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 2333234 00:13:30.579 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 2333234 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.840 13:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.840 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.755 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.755 00:13:32.755 real 0m14.250s 00:13:32.755 user 0m21.001s 00:13:32.755 sys 0m6.760s 00:13:32.755 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.755 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.755 ************************************ 00:13:32.755 END TEST nvmf_invalid 00:13:32.755 ************************************ 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.018 ************************************ 00:13:33.018 START TEST nvmf_connect_stress 00:13:33.018 ************************************ 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.018 * Looking for test storage... 
00:13:33.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:33.018 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:33.281 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.281 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:33.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.281 --rc genhtml_branch_coverage=1 00:13:33.281 --rc genhtml_function_coverage=1 00:13:33.281 --rc genhtml_legend=1 00:13:33.281 --rc geninfo_all_blocks=1 00:13:33.281 --rc geninfo_unexecuted_blocks=1 00:13:33.281 00:13:33.281 ' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:33.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.281 --rc genhtml_branch_coverage=1 00:13:33.281 --rc genhtml_function_coverage=1 00:13:33.281 --rc genhtml_legend=1 00:13:33.281 --rc geninfo_all_blocks=1 00:13:33.281 --rc geninfo_unexecuted_blocks=1 00:13:33.281 00:13:33.281 ' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:33.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.281 --rc genhtml_branch_coverage=1 00:13:33.281 --rc genhtml_function_coverage=1 00:13:33.281 --rc genhtml_legend=1 00:13:33.281 --rc geninfo_all_blocks=1 00:13:33.281 --rc geninfo_unexecuted_blocks=1 00:13:33.281 00:13:33.281 ' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:33.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.281 --rc genhtml_branch_coverage=1 00:13:33.281 --rc genhtml_function_coverage=1 00:13:33.281 --rc genhtml_legend=1 00:13:33.281 --rc geninfo_all_blocks=1 00:13:33.281 --rc geninfo_unexecuted_blocks=1 00:13:33.281 00:13:33.281 ' 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.281 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.282 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.604 13:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:41.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.604 13:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:41.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.604 13:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.604 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:41.604 Found net devices under 0000:31:00.0: cvl_0_0 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:41.605 Found net devices under 0000:31:00.1: cvl_0_1 
00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:13:41.605 00:13:41.605 --- 10.0.0.2 ping statistics --- 00:13:41.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.605 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:41.605 00:13:41.605 --- 10.0.0.1 ping statistics --- 00:13:41.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.605 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:41.605 13:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2338465 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2338465 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2338465 ']' 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:41.605 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.605 [2024-11-06 13:55:27.015968] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:13:41.605 [2024-11-06 13:55:27.016037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.605 [2024-11-06 13:55:27.118922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.605 [2024-11-06 13:55:27.171283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.605 [2024-11-06 13:55:27.171335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.605 [2024-11-06 13:55:27.171344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.605 [2024-11-06 13:55:27.171351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.605 [2024-11-06 13:55:27.171358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.605 [2024-11-06 13:55:27.173261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.605 [2024-11-06 13:55:27.173402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.605 [2024-11-06 13:55:27.173402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.605 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:41.605 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:41.605 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.605 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.605 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 [2024-11-06 13:55:27.898600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 [2024-11-06 13:55:27.924295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 NULL1 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2338534 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.867 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.128 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.128 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:42.128 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.128 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.128 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.698 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.698 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:42.698 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.698 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.698 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.958 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.958 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:42.958 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.958 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.958 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.219 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.219 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:43.219 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.219 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.219 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.479 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.479 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:43.479 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.479 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.479 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.740 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.740 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:43.740 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.740 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.740 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.310 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.310 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:44.310 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.310 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.310 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.570 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:44.570 13:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.570 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.570 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.830 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.830 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:44.830 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.830 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.830 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.091 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:45.091 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.091 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.091 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.354 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.354 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:45.354 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.354 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.354 
13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.924 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.924 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:45.924 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.924 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.924 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.184 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.184 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:46.184 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.184 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.184 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.445 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.445 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:46.445 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.445 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.445 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.704 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.704 
13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:46.704 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.704 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.704 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.964 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.964 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:46.964 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.964 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.964 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.535 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.535 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:47.535 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.535 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.535 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.795 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.795 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:47.795 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:47.795 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.795 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.054 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.054 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:48.054 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.054 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.054 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.314 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.314 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:48.314 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.314 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.314 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.574 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.574 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:48.574 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.574 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.574 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.165 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.165 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:49.165 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.165 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.165 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.425 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.425 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:49.425 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.425 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.425 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.685 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.685 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:49.685 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.685 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.685 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.944 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.944 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2338534 00:13:49.944 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.944 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.944 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.204 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.204 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:50.204 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.204 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.204 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.464 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.465 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:50.465 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.465 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.465 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.035 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.035 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:51.035 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.035 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:51.035 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.295 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.295 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:51.295 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.295 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.295 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.555 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.555 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:51.555 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.555 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.555 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.815 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.815 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:51.815 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.815 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.815 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.076 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2338534 00:13:52.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2338534) - No such process 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2338534 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.076 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.076 rmmod nvme_tcp 00:13:52.336 rmmod nvme_fabrics 00:13:52.336 rmmod nvme_keyring 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2338465 ']' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2338465 ']' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2338465' 00:13:52.336 killing process with pid 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2338465 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.336 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.876 00:13:54.876 real 0m21.549s 00:13:54.876 user 0m42.590s 00:13:54.876 sys 0m9.352s 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.876 ************************************ 00:13:54.876 END TEST nvmf_connect_stress 00:13:54.876 ************************************ 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.876 ************************************ 00:13:54.876 START TEST nvmf_fused_ordering 00:13:54.876 ************************************ 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.876 * Looking for test storage... 00:13:54.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.876 13:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.876 --rc genhtml_branch_coverage=1 00:13:54.876 --rc genhtml_function_coverage=1 00:13:54.876 --rc genhtml_legend=1 00:13:54.876 --rc geninfo_all_blocks=1 00:13:54.876 --rc geninfo_unexecuted_blocks=1 00:13:54.876 00:13:54.876 ' 00:13:54.876 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.876 --rc genhtml_branch_coverage=1 00:13:54.876 --rc genhtml_function_coverage=1 00:13:54.876 --rc genhtml_legend=1 00:13:54.876 --rc geninfo_all_blocks=1 00:13:54.876 --rc geninfo_unexecuted_blocks=1 00:13:54.876 00:13:54.876 ' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.877 --rc genhtml_branch_coverage=1 00:13:54.877 --rc genhtml_function_coverage=1 00:13:54.877 --rc genhtml_legend=1 00:13:54.877 --rc geninfo_all_blocks=1 00:13:54.877 --rc geninfo_unexecuted_blocks=1 00:13:54.877 00:13:54.877 ' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.877 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:54.877 --rc genhtml_branch_coverage=1 00:13:54.877 --rc genhtml_function_coverage=1 00:13:54.877 --rc genhtml_legend=1 00:13:54.877 --rc geninfo_all_blocks=1 00:13:54.877 --rc geninfo_unexecuted_blocks=1 00:13:54.877 00:13:54.877 ' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.877 13:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.877 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.010 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:03.010 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.010 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:03.010 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.010 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:03.010 Found net devices under 0000:31:00.0: cvl_0_0 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.010 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:03.011 Found net devices under 0000:31:00.1: cvl_0_1 
00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:03.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:14:03.011 00:14:03.011 --- 10.0.0.2 ping statistics --- 00:14:03.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.011 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:14:03.011 00:14:03.011 --- 10.0.0.1 ping statistics --- 00:14:03.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.011 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:03.011 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2344882 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2344882 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2344882 ']' 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.011 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.011 [2024-11-06 13:55:48.652518] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:14:03.011 [2024-11-06 13:55:48.652586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.011 [2024-11-06 13:55:48.757627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.011 [2024-11-06 13:55:48.808062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.011 [2024-11-06 13:55:48.808113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.011 [2024-11-06 13:55:48.808121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.011 [2024-11-06 13:55:48.808129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.011 [2024-11-06 13:55:48.808135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.011 [2024-11-06 13:55:48.808984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 [2024-11-06 13:55:49.540441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.271 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.534 [2024-11-06 13:55:49.564758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.534 NULL1 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.534 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:03.534 [2024-11-06 13:55:49.634446] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:14:03.534 [2024-11-06 13:55:49.634493] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2345155 ] 00:14:03.796 Attached to nqn.2016-06.io.spdk:cnode1 00:14:03.796 Namespace ID: 1 size: 1GB 00:14:03.796 fused_ordering(0) 00:14:03.796 fused_ordering(1) 00:14:03.796 fused_ordering(2) 00:14:03.796 fused_ordering(3) 00:14:03.796 fused_ordering(4) 00:14:03.796 fused_ordering(5) 00:14:03.796 fused_ordering(6) 00:14:03.796 fused_ordering(7) 00:14:03.796 fused_ordering(8) 00:14:03.796 fused_ordering(9) 00:14:03.796 fused_ordering(10) 00:14:03.796 fused_ordering(11) 00:14:03.796 fused_ordering(12) 00:14:03.796 fused_ordering(13) 00:14:03.796 fused_ordering(14) 00:14:03.796 fused_ordering(15) 00:14:03.796 fused_ordering(16) 00:14:03.796 fused_ordering(17) 00:14:03.796 fused_ordering(18) 00:14:03.796 fused_ordering(19) 00:14:03.796 fused_ordering(20) 00:14:03.796 fused_ordering(21) 00:14:03.796 fused_ordering(22) 00:14:03.796 fused_ordering(23) 00:14:03.796 fused_ordering(24) 00:14:03.796 fused_ordering(25) 00:14:03.796 fused_ordering(26) 00:14:03.796 fused_ordering(27) 00:14:03.796 
fused_ordering(28) 00:14:03.796 fused_ordering(29) 00:14:03.796 fused_ordering(30) 00:14:03.796 fused_ordering(31) 00:14:03.796 fused_ordering(32) 00:14:03.796 fused_ordering(33) 00:14:03.796 fused_ordering(34) 00:14:03.796 fused_ordering(35) 00:14:03.796 fused_ordering(36) 00:14:03.796 fused_ordering(37) 00:14:03.796 fused_ordering(38) 00:14:03.796 fused_ordering(39) 00:14:03.796 fused_ordering(40) 00:14:03.796 fused_ordering(41) 00:14:03.796 fused_ordering(42) 00:14:03.796 fused_ordering(43) 00:14:03.796 fused_ordering(44) 00:14:03.796 fused_ordering(45) 00:14:03.796 fused_ordering(46) 00:14:03.796 fused_ordering(47) 00:14:03.796 fused_ordering(48) 00:14:03.796 fused_ordering(49) 00:14:03.796 fused_ordering(50) 00:14:03.796 fused_ordering(51) 00:14:03.796 fused_ordering(52) 00:14:03.796 fused_ordering(53) 00:14:03.796 fused_ordering(54) 00:14:03.796 fused_ordering(55) 00:14:03.796 fused_ordering(56) 00:14:03.796 fused_ordering(57) 00:14:03.796 fused_ordering(58) 00:14:03.796 fused_ordering(59) 00:14:03.796 fused_ordering(60) 00:14:03.796 fused_ordering(61) 00:14:03.796 fused_ordering(62) 00:14:03.796 fused_ordering(63) 00:14:03.796 fused_ordering(64) 00:14:03.796 fused_ordering(65) 00:14:03.796 fused_ordering(66) 00:14:03.796 fused_ordering(67) 00:14:03.796 fused_ordering(68) 00:14:03.796 fused_ordering(69) 00:14:03.796 fused_ordering(70) 00:14:03.796 fused_ordering(71) 00:14:03.796 fused_ordering(72) 00:14:03.796 fused_ordering(73) 00:14:03.796 fused_ordering(74) 00:14:03.796 fused_ordering(75) 00:14:03.796 fused_ordering(76) 00:14:03.796 fused_ordering(77) 00:14:03.796 fused_ordering(78) 00:14:03.796 fused_ordering(79) 00:14:03.796 fused_ordering(80) 00:14:03.796 fused_ordering(81) 00:14:03.796 fused_ordering(82) 00:14:03.796 fused_ordering(83) 00:14:03.796 fused_ordering(84) 00:14:03.796 fused_ordering(85) 00:14:03.796 fused_ordering(86) 00:14:03.796 fused_ordering(87) 00:14:03.796 fused_ordering(88) 00:14:03.796 fused_ordering(89) 00:14:03.796 
fused_ordering(90) 00:14:03.796 fused_ordering(91) 00:14:03.796 fused_ordering(92) 00:14:03.796 fused_ordering(93) 00:14:03.796 fused_ordering(94) 00:14:03.796 fused_ordering(95) 00:14:03.796 fused_ordering(96) 00:14:03.796 fused_ordering(97) 00:14:03.796 fused_ordering(98) 00:14:03.796 fused_ordering(99) 00:14:03.796 fused_ordering(100) 00:14:03.797 fused_ordering(101) 00:14:03.797 fused_ordering(102) 00:14:03.797 fused_ordering(103) 00:14:03.797 fused_ordering(104) 00:14:03.797 fused_ordering(105) 00:14:03.797 fused_ordering(106) 00:14:03.797 fused_ordering(107) 00:14:03.797 fused_ordering(108) 00:14:03.797 fused_ordering(109) 00:14:03.797 fused_ordering(110) 00:14:03.797 fused_ordering(111) 00:14:03.797 fused_ordering(112) 00:14:03.797 fused_ordering(113) 00:14:03.797 fused_ordering(114) 00:14:03.797 fused_ordering(115) 00:14:03.797 fused_ordering(116) 00:14:03.797 fused_ordering(117) 00:14:03.797 fused_ordering(118) 00:14:03.797 fused_ordering(119) 00:14:03.797 fused_ordering(120) 00:14:03.797 fused_ordering(121) 00:14:03.797 fused_ordering(122) 00:14:03.797 fused_ordering(123) 00:14:03.797 fused_ordering(124) 00:14:03.797 fused_ordering(125) 00:14:03.797 fused_ordering(126) 00:14:03.797 fused_ordering(127) 00:14:03.797 fused_ordering(128) 00:14:03.797 fused_ordering(129) 00:14:03.797 fused_ordering(130) 00:14:03.797 fused_ordering(131) 00:14:03.797 fused_ordering(132) 00:14:03.797 fused_ordering(133) 00:14:03.797 fused_ordering(134) 00:14:03.797 fused_ordering(135) 00:14:03.797 fused_ordering(136) 00:14:03.797 fused_ordering(137) 00:14:03.797 fused_ordering(138) 00:14:03.797 fused_ordering(139) 00:14:03.797 fused_ordering(140) 00:14:03.797 fused_ordering(141) 00:14:03.797 fused_ordering(142) 00:14:03.797 fused_ordering(143) 00:14:03.797 fused_ordering(144) 00:14:03.797 fused_ordering(145) 00:14:03.797 fused_ordering(146) 00:14:03.797 fused_ordering(147) 00:14:03.797 fused_ordering(148) 00:14:03.797 fused_ordering(149) 00:14:03.797 fused_ordering(150) 
00:14:03.797 fused_ordering(151) 00:14:03.797 fused_ordering(152) 00:14:03.797 fused_ordering(153) 00:14:03.797 fused_ordering(154) 00:14:03.797 fused_ordering(155) 00:14:03.797 fused_ordering(156) 00:14:03.797 fused_ordering(157) 00:14:03.797 fused_ordering(158) 00:14:03.797 fused_ordering(159) 00:14:03.797 fused_ordering(160) 00:14:03.797 fused_ordering(161) 00:14:03.797 fused_ordering(162) 00:14:03.797 fused_ordering(163) 00:14:03.797 fused_ordering(164) 00:14:03.797 fused_ordering(165) 00:14:03.797 fused_ordering(166) 00:14:03.797 fused_ordering(167) 00:14:03.797 fused_ordering(168) 00:14:03.797 fused_ordering(169) 00:14:03.797 fused_ordering(170) 00:14:03.797 fused_ordering(171) 00:14:03.797 fused_ordering(172) 00:14:03.797 fused_ordering(173) 00:14:03.797 fused_ordering(174) 00:14:03.797 fused_ordering(175) 00:14:03.797 fused_ordering(176) 00:14:03.797 fused_ordering(177) 00:14:03.797 fused_ordering(178) 00:14:03.797 fused_ordering(179) 00:14:03.797 fused_ordering(180) 00:14:03.797 fused_ordering(181) 00:14:03.797 fused_ordering(182) 00:14:03.797 fused_ordering(183) 00:14:03.797 fused_ordering(184) 00:14:03.797 fused_ordering(185) 00:14:03.797 fused_ordering(186) 00:14:03.797 fused_ordering(187) 00:14:03.797 fused_ordering(188) 00:14:03.797 fused_ordering(189) 00:14:03.797 fused_ordering(190) 00:14:03.797 fused_ordering(191) 00:14:03.797 fused_ordering(192) 00:14:03.797 fused_ordering(193) 00:14:03.797 fused_ordering(194) 00:14:03.797 fused_ordering(195) 00:14:03.797 fused_ordering(196) 00:14:03.797 fused_ordering(197) 00:14:03.797 fused_ordering(198) 00:14:03.797 fused_ordering(199) 00:14:03.797 fused_ordering(200) 00:14:03.797 fused_ordering(201) 00:14:03.797 fused_ordering(202) 00:14:03.797 fused_ordering(203) 00:14:03.797 fused_ordering(204) 00:14:03.797 fused_ordering(205) 00:14:04.368 fused_ordering(206) 00:14:04.368 fused_ordering(207) 00:14:04.368 fused_ordering(208) 00:14:04.368 fused_ordering(209) 00:14:04.368 fused_ordering(210) 00:14:04.368 
fused_ordering(211) 00:14:04.368 fused_ordering(212) 00:14:04.368 fused_ordering(213) 00:14:04.368 fused_ordering(214) 00:14:04.368 fused_ordering(215) 00:14:04.368 fused_ordering(216) 00:14:04.368 fused_ordering(217) 00:14:04.368 fused_ordering(218) 00:14:04.368 fused_ordering(219) 00:14:04.368 fused_ordering(220) 00:14:04.368 fused_ordering(221) 00:14:04.368 fused_ordering(222) 00:14:04.368 fused_ordering(223) 00:14:04.368 fused_ordering(224) 00:14:04.368 fused_ordering(225) 00:14:04.368 fused_ordering(226) 00:14:04.368 fused_ordering(227) 00:14:04.368 fused_ordering(228) 00:14:04.368 fused_ordering(229) 00:14:04.368 fused_ordering(230) 00:14:04.368 fused_ordering(231) 00:14:04.368 fused_ordering(232) 00:14:04.368 fused_ordering(233) 00:14:04.368 fused_ordering(234) 00:14:04.368 fused_ordering(235) 00:14:04.368 fused_ordering(236) 00:14:04.368 fused_ordering(237) 00:14:04.368 fused_ordering(238) 00:14:04.368 fused_ordering(239) 00:14:04.368 fused_ordering(240) 00:14:04.368 fused_ordering(241) 00:14:04.368 fused_ordering(242) 00:14:04.368 fused_ordering(243) 00:14:04.368 fused_ordering(244) 00:14:04.368 fused_ordering(245) 00:14:04.368 fused_ordering(246) 00:14:04.368 fused_ordering(247) 00:14:04.368 fused_ordering(248) 00:14:04.368 fused_ordering(249) 00:14:04.368 fused_ordering(250) 00:14:04.368 fused_ordering(251) 00:14:04.368 fused_ordering(252) 00:14:04.368 fused_ordering(253) 00:14:04.368 fused_ordering(254) 00:14:04.368 fused_ordering(255) 00:14:04.368 fused_ordering(256) 00:14:04.368 fused_ordering(257) 00:14:04.368 fused_ordering(258) 00:14:04.368 fused_ordering(259) 00:14:04.368 fused_ordering(260) 00:14:04.368 fused_ordering(261) 00:14:04.368 fused_ordering(262) 00:14:04.368 fused_ordering(263) 00:14:04.368 fused_ordering(264) 00:14:04.368 fused_ordering(265) 00:14:04.368 fused_ordering(266) 00:14:04.368 fused_ordering(267) 00:14:04.368 fused_ordering(268) 00:14:04.368 fused_ordering(269) 00:14:04.368 fused_ordering(270) 00:14:04.368 fused_ordering(271) 
00:14:04.368 fused_ordering(272) 00:14:04.368 fused_ordering(273) 00:14:04.368 fused_ordering(274) 00:14:04.368 fused_ordering(275) 00:14:04.368 fused_ordering(276) 00:14:04.368 fused_ordering(277) 00:14:04.368 fused_ordering(278) 00:14:04.368 fused_ordering(279) 00:14:04.368 fused_ordering(280) 00:14:04.368 fused_ordering(281) 00:14:04.368 fused_ordering(282) 00:14:04.368 fused_ordering(283) 00:14:04.368 fused_ordering(284) 00:14:04.368 fused_ordering(285) 00:14:04.368 fused_ordering(286) 00:14:04.368 fused_ordering(287) 00:14:04.368 fused_ordering(288) 00:14:04.368 fused_ordering(289) 00:14:04.368 fused_ordering(290) 00:14:04.368 fused_ordering(291) 00:14:04.368 fused_ordering(292) 00:14:04.368 fused_ordering(293) 00:14:04.368 fused_ordering(294) 00:14:04.368 fused_ordering(295) 00:14:04.368 fused_ordering(296) 00:14:04.368 fused_ordering(297) 00:14:04.368 fused_ordering(298) 00:14:04.368 fused_ordering(299) 00:14:04.368 fused_ordering(300) 00:14:04.368 fused_ordering(301) 00:14:04.368 fused_ordering(302) 00:14:04.368 fused_ordering(303) 00:14:04.368 fused_ordering(304) 00:14:04.368 fused_ordering(305) 00:14:04.368 fused_ordering(306) 00:14:04.368 fused_ordering(307) 00:14:04.368 fused_ordering(308) 00:14:04.368 fused_ordering(309) 00:14:04.368 fused_ordering(310) 00:14:04.368 fused_ordering(311) 00:14:04.368 fused_ordering(312) 00:14:04.368 fused_ordering(313) 00:14:04.368 fused_ordering(314) 00:14:04.368 fused_ordering(315) 00:14:04.368 fused_ordering(316) 00:14:04.368 fused_ordering(317) 00:14:04.368 fused_ordering(318) 00:14:04.368 fused_ordering(319) 00:14:04.368 fused_ordering(320) 00:14:04.368 fused_ordering(321) 00:14:04.368 fused_ordering(322) 00:14:04.368 fused_ordering(323) 00:14:04.368 fused_ordering(324) 00:14:04.368 fused_ordering(325) 00:14:04.368 fused_ordering(326) 00:14:04.368 fused_ordering(327) 00:14:04.368 fused_ordering(328) 00:14:04.368 fused_ordering(329) 00:14:04.368 fused_ordering(330) 00:14:04.368 fused_ordering(331) 00:14:04.368 
fused_ordering(332) 00:14:04.368 fused_ordering(333) 00:14:04.368 fused_ordering(334) 00:14:04.368 fused_ordering(335) 00:14:04.368 fused_ordering(336) 00:14:04.368 fused_ordering(337) 00:14:04.368 fused_ordering(338) 00:14:04.368 fused_ordering(339) 00:14:04.368 fused_ordering(340) 00:14:04.368 fused_ordering(341) 00:14:04.368 fused_ordering(342) 00:14:04.368 fused_ordering(343) 00:14:04.368 fused_ordering(344) 00:14:04.368 fused_ordering(345) 00:14:04.368 fused_ordering(346) 00:14:04.368 fused_ordering(347) 00:14:04.368 fused_ordering(348) 00:14:04.368 fused_ordering(349) 00:14:04.368 fused_ordering(350) 00:14:04.368 fused_ordering(351) 00:14:04.368 fused_ordering(352) 00:14:04.368 fused_ordering(353) 00:14:04.368 fused_ordering(354) 00:14:04.368 fused_ordering(355) 00:14:04.368 fused_ordering(356) 00:14:04.368 fused_ordering(357) 00:14:04.368 fused_ordering(358) 00:14:04.368 fused_ordering(359) 00:14:04.368 fused_ordering(360) 00:14:04.368 fused_ordering(361) 00:14:04.368 fused_ordering(362) 00:14:04.368 fused_ordering(363) 00:14:04.368 fused_ordering(364) 00:14:04.368 fused_ordering(365) 00:14:04.368 fused_ordering(366) 00:14:04.368 fused_ordering(367) 00:14:04.368 fused_ordering(368) 00:14:04.368 fused_ordering(369) 00:14:04.368 fused_ordering(370) 00:14:04.368 fused_ordering(371) 00:14:04.368 fused_ordering(372) 00:14:04.368 fused_ordering(373) 00:14:04.368 fused_ordering(374) 00:14:04.368 fused_ordering(375) 00:14:04.368 fused_ordering(376) 00:14:04.368 fused_ordering(377) 00:14:04.368 fused_ordering(378) 00:14:04.368 fused_ordering(379) 00:14:04.368 fused_ordering(380) 00:14:04.368 fused_ordering(381) 00:14:04.368 fused_ordering(382) 00:14:04.368 fused_ordering(383) 00:14:04.368 fused_ordering(384) 00:14:04.368 fused_ordering(385) 00:14:04.368 fused_ordering(386) 00:14:04.368 fused_ordering(387) 00:14:04.368 fused_ordering(388) 00:14:04.368 fused_ordering(389) 00:14:04.368 fused_ordering(390) 00:14:04.368 fused_ordering(391) 00:14:04.368 fused_ordering(392) 
00:14:04.368 fused_ordering(393) 00:14:04.368 fused_ordering(394) 00:14:04.368 fused_ordering(395) 00:14:04.368 fused_ordering(396) 00:14:04.368 fused_ordering(397) 00:14:04.368 fused_ordering(398) 00:14:04.368 fused_ordering(399) 00:14:04.368 fused_ordering(400) 00:14:04.368 fused_ordering(401) 00:14:04.368 fused_ordering(402) 00:14:04.368 fused_ordering(403) 00:14:04.368 fused_ordering(404) 00:14:04.368 fused_ordering(405) 00:14:04.368 fused_ordering(406) 00:14:04.368 fused_ordering(407) 00:14:04.368 fused_ordering(408) 00:14:04.368 fused_ordering(409) 00:14:04.368 fused_ordering(410) 00:14:04.629 fused_ordering(411) 00:14:04.629 fused_ordering(412) 00:14:04.629 fused_ordering(413) 00:14:04.629 fused_ordering(414) 00:14:04.629 fused_ordering(415) 00:14:04.629 fused_ordering(416) 00:14:04.629 fused_ordering(417) 00:14:04.629 fused_ordering(418) 00:14:04.629 fused_ordering(419) 00:14:04.629 fused_ordering(420) 00:14:04.629 fused_ordering(421) 00:14:04.629 fused_ordering(422) 00:14:04.629 fused_ordering(423) 00:14:04.629 fused_ordering(424) 00:14:04.629 fused_ordering(425) 00:14:04.629 fused_ordering(426) 00:14:04.629 fused_ordering(427) 00:14:04.629 fused_ordering(428) 00:14:04.629 fused_ordering(429) 00:14:04.629 fused_ordering(430) 00:14:04.629 fused_ordering(431) 00:14:04.629 fused_ordering(432) 00:14:04.629 fused_ordering(433) 00:14:04.629 fused_ordering(434) 00:14:04.629 fused_ordering(435) 00:14:04.629 fused_ordering(436) 00:14:04.629 fused_ordering(437) 00:14:04.629 fused_ordering(438) 00:14:04.629 fused_ordering(439) 00:14:04.629 fused_ordering(440) 00:14:04.629 fused_ordering(441) 00:14:04.629 fused_ordering(442) 00:14:04.629 fused_ordering(443) 00:14:04.629 fused_ordering(444) 00:14:04.629 fused_ordering(445) 00:14:04.629 fused_ordering(446) 00:14:04.629 fused_ordering(447) 00:14:04.629 fused_ordering(448) 00:14:04.629 fused_ordering(449) 00:14:04.629 fused_ordering(450) 00:14:04.629 fused_ordering(451) 00:14:04.629 fused_ordering(452) 00:14:04.629 
fused_ordering(453) 00:14:04.629 fused_ordering(454) 00:14:04.629 fused_ordering(455) 00:14:04.629 fused_ordering(456) 00:14:04.629 fused_ordering(457) 00:14:04.629 fused_ordering(458) 00:14:04.629 fused_ordering(459) 00:14:04.629 fused_ordering(460) 00:14:04.629 fused_ordering(461) 00:14:04.629 fused_ordering(462) 00:14:04.629 fused_ordering(463) 00:14:04.629 fused_ordering(464) 00:14:04.629 fused_ordering(465) 00:14:04.629 fused_ordering(466) 00:14:04.629 fused_ordering(467) 00:14:04.629 fused_ordering(468) 00:14:04.629 fused_ordering(469) 00:14:04.629 fused_ordering(470) 00:14:04.629 fused_ordering(471) 00:14:04.629 fused_ordering(472) 00:14:04.629 fused_ordering(473) 00:14:04.629 fused_ordering(474) 00:14:04.629 fused_ordering(475) 00:14:04.629 fused_ordering(476) 00:14:04.629 fused_ordering(477) 00:14:04.629 fused_ordering(478) 00:14:04.629 fused_ordering(479) 00:14:04.629 fused_ordering(480) 00:14:04.629 fused_ordering(481) 00:14:04.629 fused_ordering(482) 00:14:04.629 fused_ordering(483) 00:14:04.629 fused_ordering(484) 00:14:04.629 fused_ordering(485) 00:14:04.629 fused_ordering(486) 00:14:04.629 fused_ordering(487) 00:14:04.629 fused_ordering(488) 00:14:04.629 fused_ordering(489) 00:14:04.629 fused_ordering(490) 00:14:04.629 fused_ordering(491) 00:14:04.629 fused_ordering(492) 00:14:04.629 fused_ordering(493) 00:14:04.629 fused_ordering(494) 00:14:04.629 fused_ordering(495) 00:14:04.629 fused_ordering(496) 00:14:04.629 fused_ordering(497) 00:14:04.629 fused_ordering(498) 00:14:04.629 fused_ordering(499) 00:14:04.629 fused_ordering(500) 00:14:04.629 fused_ordering(501) 00:14:04.629 fused_ordering(502) 00:14:04.629 fused_ordering(503) 00:14:04.629 fused_ordering(504) 00:14:04.629 fused_ordering(505) 00:14:04.629 fused_ordering(506) 00:14:04.629 fused_ordering(507) 00:14:04.629 fused_ordering(508) 00:14:04.629 fused_ordering(509) 00:14:04.629 fused_ordering(510) 00:14:04.629 fused_ordering(511) 00:14:04.629 fused_ordering(512) 00:14:04.629 fused_ordering(513) 
00:14:04.629 fused_ordering(514) 00:14:04.629 fused_ordering(515) 00:14:04.629 fused_ordering(516) 00:14:04.629 fused_ordering(517) 00:14:04.629 fused_ordering(518) 00:14:04.629 fused_ordering(519) 00:14:04.629 fused_ordering(520) 00:14:04.629 fused_ordering(521) 00:14:04.629 fused_ordering(522) 00:14:04.629 fused_ordering(523) 00:14:04.629 fused_ordering(524) 00:14:04.629 fused_ordering(525) 00:14:04.629 fused_ordering(526) 00:14:04.629 fused_ordering(527) 00:14:04.629 fused_ordering(528) 00:14:04.629 fused_ordering(529) 00:14:04.629 fused_ordering(530) 00:14:04.629 fused_ordering(531) 00:14:04.629 fused_ordering(532) 00:14:04.629 fused_ordering(533) 00:14:04.629 fused_ordering(534) 00:14:04.629 fused_ordering(535) 00:14:04.629 fused_ordering(536) 00:14:04.630 fused_ordering(537) 00:14:04.630 fused_ordering(538) 00:14:04.630 fused_ordering(539) 00:14:04.630 fused_ordering(540) 00:14:04.630 fused_ordering(541) 00:14:04.630 fused_ordering(542) 00:14:04.630 fused_ordering(543) 00:14:04.630 fused_ordering(544) 00:14:04.630 fused_ordering(545) 00:14:04.630 fused_ordering(546) 00:14:04.630 fused_ordering(547) 00:14:04.630 fused_ordering(548) 00:14:04.630 fused_ordering(549) 00:14:04.630 fused_ordering(550) 00:14:04.630 fused_ordering(551) 00:14:04.630 fused_ordering(552) 00:14:04.630 fused_ordering(553) 00:14:04.630 fused_ordering(554) 00:14:04.630 fused_ordering(555) 00:14:04.630 fused_ordering(556) 00:14:04.630 fused_ordering(557) 00:14:04.630 fused_ordering(558) 00:14:04.630 fused_ordering(559) 00:14:04.630 fused_ordering(560) 00:14:04.630 fused_ordering(561) 00:14:04.630 fused_ordering(562) 00:14:04.630 fused_ordering(563) 00:14:04.630 fused_ordering(564) 00:14:04.630 fused_ordering(565) 00:14:04.630 fused_ordering(566) 00:14:04.630 fused_ordering(567) 00:14:04.630 fused_ordering(568) 00:14:04.630 fused_ordering(569) 00:14:04.630 fused_ordering(570) 00:14:04.630 fused_ordering(571) 00:14:04.630 fused_ordering(572) 00:14:04.630 fused_ordering(573) 00:14:04.630 
fused_ordering(574) 00:14:04.630 fused_ordering(575) 00:14:04.630 fused_ordering(576) 00:14:04.630 fused_ordering(577) 00:14:04.630 fused_ordering(578) 00:14:04.630 fused_ordering(579) 00:14:04.630 fused_ordering(580) 00:14:04.630 fused_ordering(581) 00:14:04.630 fused_ordering(582) 00:14:04.630 fused_ordering(583) 00:14:04.630 fused_ordering(584) 00:14:04.630 fused_ordering(585) 00:14:04.630 fused_ordering(586) 00:14:04.630 fused_ordering(587) 00:14:04.630 fused_ordering(588) 00:14:04.630 fused_ordering(589) 00:14:04.630 fused_ordering(590) 00:14:04.630 fused_ordering(591) 00:14:04.630 fused_ordering(592) 00:14:04.630 fused_ordering(593) 00:14:04.630 fused_ordering(594) 00:14:04.630 fused_ordering(595) 00:14:04.630 fused_ordering(596) 00:14:04.630 fused_ordering(597) 00:14:04.630 fused_ordering(598) 00:14:04.630 fused_ordering(599) 00:14:04.630 fused_ordering(600) 00:14:04.630 fused_ordering(601) 00:14:04.630 fused_ordering(602) 00:14:04.630 fused_ordering(603) 00:14:04.630 fused_ordering(604) 00:14:04.630 fused_ordering(605) 00:14:04.630 fused_ordering(606) 00:14:04.630 fused_ordering(607) 00:14:04.630 fused_ordering(608) 00:14:04.630 fused_ordering(609) 00:14:04.630 fused_ordering(610) 00:14:04.630 fused_ordering(611) 00:14:04.630 fused_ordering(612) 00:14:04.630 fused_ordering(613) 00:14:04.630 fused_ordering(614) 00:14:04.630 fused_ordering(615) 00:14:05.200 fused_ordering(616) 00:14:05.200 fused_ordering(617) 00:14:05.200 fused_ordering(618) 00:14:05.200 fused_ordering(619) 00:14:05.200 fused_ordering(620) 00:14:05.200 fused_ordering(621) 00:14:05.200 fused_ordering(622) 00:14:05.200 fused_ordering(623) 00:14:05.200 fused_ordering(624) 00:14:05.200 fused_ordering(625) 00:14:05.200 fused_ordering(626) 00:14:05.200 fused_ordering(627) 00:14:05.200 fused_ordering(628) 00:14:05.200 fused_ordering(629) 00:14:05.200 fused_ordering(630) 00:14:05.200 fused_ordering(631) 00:14:05.200 fused_ordering(632) 00:14:05.200 fused_ordering(633) 00:14:05.200 fused_ordering(634) 
00:14:05.200 fused_ordering(635) 00:14:05.200 fused_ordering(636) 00:14:05.200 fused_ordering(637) 00:14:05.200 fused_ordering(638) 00:14:05.200 fused_ordering(639) 00:14:05.200 fused_ordering(640) 00:14:05.200 fused_ordering(641) 00:14:05.200 fused_ordering(642) 00:14:05.200 fused_ordering(643) 00:14:05.200 fused_ordering(644) 00:14:05.200 fused_ordering(645) 00:14:05.200 fused_ordering(646) 00:14:05.200 fused_ordering(647) 00:14:05.200 fused_ordering(648) 00:14:05.200 fused_ordering(649) 00:14:05.200 fused_ordering(650) 00:14:05.200 fused_ordering(651) 00:14:05.200 fused_ordering(652) 00:14:05.200 fused_ordering(653) 00:14:05.200 fused_ordering(654) 00:14:05.200 fused_ordering(655) 00:14:05.200 fused_ordering(656) 00:14:05.200 fused_ordering(657) 00:14:05.200 fused_ordering(658) 00:14:05.200 fused_ordering(659) 00:14:05.200 fused_ordering(660) 00:14:05.200 fused_ordering(661) 00:14:05.200 fused_ordering(662) 00:14:05.200 fused_ordering(663) 00:14:05.200 fused_ordering(664) 00:14:05.200 fused_ordering(665) 00:14:05.200 fused_ordering(666) 00:14:05.200 fused_ordering(667) 00:14:05.200 fused_ordering(668) 00:14:05.200 fused_ordering(669) 00:14:05.200 fused_ordering(670) 00:14:05.200 fused_ordering(671) 00:14:05.200 fused_ordering(672) 00:14:05.200 fused_ordering(673) 00:14:05.200 fused_ordering(674) 00:14:05.200 fused_ordering(675) 00:14:05.200 fused_ordering(676) 00:14:05.201 fused_ordering(677) 00:14:05.201 fused_ordering(678) 00:14:05.201 fused_ordering(679) 00:14:05.201 fused_ordering(680) 00:14:05.201 fused_ordering(681) 00:14:05.201 fused_ordering(682) 00:14:05.201 fused_ordering(683) 00:14:05.201 fused_ordering(684) 00:14:05.201 fused_ordering(685) 00:14:05.201 fused_ordering(686) 00:14:05.201 fused_ordering(687) 00:14:05.201 fused_ordering(688) 00:14:05.201 fused_ordering(689) 00:14:05.201 fused_ordering(690) 00:14:05.201 fused_ordering(691) 00:14:05.201 fused_ordering(692) 00:14:05.201 fused_ordering(693) 00:14:05.201 fused_ordering(694) 00:14:05.201 
fused_ordering(695) 00:14:05.201 fused_ordering(696) 00:14:05.201 fused_ordering(697) 00:14:05.201 fused_ordering(698) 00:14:05.201 fused_ordering(699) 00:14:05.201 fused_ordering(700) 00:14:05.201 fused_ordering(701) 00:14:05.201 fused_ordering(702) 00:14:05.201 fused_ordering(703) 00:14:05.201 fused_ordering(704) 00:14:05.201 fused_ordering(705) 00:14:05.201 fused_ordering(706) 00:14:05.201 fused_ordering(707) 00:14:05.201 fused_ordering(708) 00:14:05.201 fused_ordering(709) 00:14:05.201 fused_ordering(710) 00:14:05.201 fused_ordering(711) 00:14:05.201 fused_ordering(712) 00:14:05.201 fused_ordering(713) 00:14:05.201 fused_ordering(714) 00:14:05.201 fused_ordering(715) 00:14:05.201 fused_ordering(716) 00:14:05.201 fused_ordering(717) 00:14:05.201 fused_ordering(718) 00:14:05.201 fused_ordering(719) 00:14:05.201 fused_ordering(720) 00:14:05.201 fused_ordering(721) 00:14:05.201 fused_ordering(722) 00:14:05.201 fused_ordering(723) 00:14:05.201 fused_ordering(724) 00:14:05.201 fused_ordering(725) 00:14:05.201 fused_ordering(726) 00:14:05.201 fused_ordering(727) 00:14:05.201 fused_ordering(728) 00:14:05.201 fused_ordering(729) 00:14:05.201 fused_ordering(730) 00:14:05.201 fused_ordering(731) 00:14:05.201 fused_ordering(732) 00:14:05.201 fused_ordering(733) 00:14:05.201 fused_ordering(734) 00:14:05.201 fused_ordering(735) 00:14:05.201 fused_ordering(736) 00:14:05.201 fused_ordering(737) 00:14:05.201 fused_ordering(738) 00:14:05.201 fused_ordering(739) 00:14:05.201 fused_ordering(740) 00:14:05.201 fused_ordering(741) 00:14:05.201 fused_ordering(742) 00:14:05.201 fused_ordering(743) 00:14:05.201 fused_ordering(744) 00:14:05.201 fused_ordering(745) 00:14:05.201 fused_ordering(746) 00:14:05.201 fused_ordering(747) 00:14:05.201 fused_ordering(748) 00:14:05.201 fused_ordering(749) 00:14:05.201 fused_ordering(750) 00:14:05.201 fused_ordering(751) 00:14:05.201 fused_ordering(752) 00:14:05.201 fused_ordering(753) 00:14:05.201 fused_ordering(754) 00:14:05.201 fused_ordering(755) 
00:14:05.201 fused_ordering(756) 00:14:05.201 fused_ordering(757) 00:14:05.201 fused_ordering(758) 00:14:05.201 fused_ordering(759) 00:14:05.201 fused_ordering(760) 00:14:05.201 fused_ordering(761) 00:14:05.201 fused_ordering(762) 00:14:05.201 fused_ordering(763) 00:14:05.201 fused_ordering(764) 00:14:05.201 fused_ordering(765) 00:14:05.201 fused_ordering(766) 00:14:05.201 fused_ordering(767) 00:14:05.201 fused_ordering(768) 00:14:05.201 fused_ordering(769) 00:14:05.201 fused_ordering(770) 00:14:05.201 fused_ordering(771) 00:14:05.201 fused_ordering(772) 00:14:05.201 fused_ordering(773) 00:14:05.201 fused_ordering(774) 00:14:05.201 fused_ordering(775) 00:14:05.201 fused_ordering(776) 00:14:05.201 fused_ordering(777) 00:14:05.201 fused_ordering(778) 00:14:05.201 fused_ordering(779) 00:14:05.201 fused_ordering(780) 00:14:05.201 fused_ordering(781) 00:14:05.201 fused_ordering(782) 00:14:05.201 fused_ordering(783) 00:14:05.201 fused_ordering(784) 00:14:05.201 fused_ordering(785) 00:14:05.201 fused_ordering(786) 00:14:05.201 fused_ordering(787) 00:14:05.201 fused_ordering(788) 00:14:05.201 fused_ordering(789) 00:14:05.201 fused_ordering(790) 00:14:05.201 fused_ordering(791) 00:14:05.201 fused_ordering(792) 00:14:05.201 fused_ordering(793) 00:14:05.201 fused_ordering(794) 00:14:05.201 fused_ordering(795) 00:14:05.201 fused_ordering(796) 00:14:05.201 fused_ordering(797) 00:14:05.201 fused_ordering(798) 00:14:05.201 fused_ordering(799) 00:14:05.201 fused_ordering(800) 00:14:05.201 fused_ordering(801) 00:14:05.201 fused_ordering(802) 00:14:05.201 fused_ordering(803) 00:14:05.201 fused_ordering(804) 00:14:05.201 fused_ordering(805) 00:14:05.201 fused_ordering(806) 00:14:05.201 fused_ordering(807) 00:14:05.201 fused_ordering(808) 00:14:05.201 fused_ordering(809) 00:14:05.201 fused_ordering(810) 00:14:05.201 fused_ordering(811) 00:14:05.201 fused_ordering(812) 00:14:05.201 fused_ordering(813) 00:14:05.201 fused_ordering(814) 00:14:05.201 fused_ordering(815) 00:14:05.201 
fused_ordering(816) 00:14:05.201 fused_ordering(817) 00:14:05.201 fused_ordering(818) 00:14:05.201 fused_ordering(819) 00:14:05.201 fused_ordering(820) 00:14:06.141 fused_ordering(821) 00:14:06.141 fused_ordering(822) 00:14:06.141 fused_ordering(823) 00:14:06.141 fused_ordering(824) 00:14:06.141 fused_ordering(825) 00:14:06.141 fused_ordering(826) 00:14:06.141 fused_ordering(827) 00:14:06.141 fused_ordering(828) 00:14:06.141 fused_ordering(829) 00:14:06.141 fused_ordering(830) 00:14:06.141 fused_ordering(831) 00:14:06.141 fused_ordering(832) 00:14:06.141 fused_ordering(833) 00:14:06.141 fused_ordering(834) 00:14:06.141 fused_ordering(835) 00:14:06.141 fused_ordering(836) 00:14:06.141 fused_ordering(837) 00:14:06.141 fused_ordering(838) 00:14:06.141 fused_ordering(839) 00:14:06.141 fused_ordering(840) 00:14:06.141 fused_ordering(841) 00:14:06.141 fused_ordering(842) 00:14:06.141 fused_ordering(843) 00:14:06.141 fused_ordering(844) 00:14:06.141 fused_ordering(845) 00:14:06.141 fused_ordering(846) 00:14:06.141 fused_ordering(847) 00:14:06.141 fused_ordering(848) 00:14:06.141 fused_ordering(849) 00:14:06.141 fused_ordering(850) 00:14:06.142 fused_ordering(851) 00:14:06.142 fused_ordering(852) 00:14:06.142 fused_ordering(853) 00:14:06.142 fused_ordering(854) 00:14:06.142 fused_ordering(855) 00:14:06.142 fused_ordering(856) 00:14:06.142 fused_ordering(857) 00:14:06.142 fused_ordering(858) 00:14:06.142 fused_ordering(859) 00:14:06.142 fused_ordering(860) 00:14:06.142 fused_ordering(861) 00:14:06.142 fused_ordering(862) 00:14:06.142 fused_ordering(863) 00:14:06.142 fused_ordering(864) 00:14:06.142 fused_ordering(865) 00:14:06.142 fused_ordering(866) 00:14:06.142 fused_ordering(867) 00:14:06.142 fused_ordering(868) 00:14:06.142 fused_ordering(869) 00:14:06.142 fused_ordering(870) 00:14:06.142 fused_ordering(871) 00:14:06.142 fused_ordering(872) 00:14:06.142 fused_ordering(873) 00:14:06.142 fused_ordering(874) 00:14:06.142 fused_ordering(875) 00:14:06.142 fused_ordering(876) 
00:14:06.142 fused_ordering(877) 00:14:06.142 fused_ordering(878) 00:14:06.142 fused_ordering(879) 00:14:06.142 fused_ordering(880) 00:14:06.142 fused_ordering(881) 00:14:06.142 fused_ordering(882) 00:14:06.142 fused_ordering(883) 00:14:06.142 fused_ordering(884) 00:14:06.142 fused_ordering(885) 00:14:06.142 fused_ordering(886) 00:14:06.142 fused_ordering(887) 00:14:06.142 fused_ordering(888) 00:14:06.142 fused_ordering(889) 00:14:06.142 fused_ordering(890) 00:14:06.142 fused_ordering(891) 00:14:06.142 fused_ordering(892) 00:14:06.142 fused_ordering(893) 00:14:06.142 fused_ordering(894) 00:14:06.142 fused_ordering(895) 00:14:06.142 fused_ordering(896) 00:14:06.142 fused_ordering(897) 00:14:06.142 fused_ordering(898) 00:14:06.142 fused_ordering(899) 00:14:06.142 fused_ordering(900) 00:14:06.142 fused_ordering(901) 00:14:06.142 fused_ordering(902) 00:14:06.142 fused_ordering(903) 00:14:06.142 fused_ordering(904) 00:14:06.142 fused_ordering(905) 00:14:06.142 fused_ordering(906) 00:14:06.142 fused_ordering(907) 00:14:06.142 fused_ordering(908) 00:14:06.142 fused_ordering(909) 00:14:06.142 fused_ordering(910) 00:14:06.142 fused_ordering(911) 00:14:06.142 fused_ordering(912) 00:14:06.142 fused_ordering(913) 00:14:06.142 fused_ordering(914) 00:14:06.142 fused_ordering(915) 00:14:06.142 fused_ordering(916) 00:14:06.142 fused_ordering(917) 00:14:06.142 fused_ordering(918) 00:14:06.142 fused_ordering(919) 00:14:06.142 fused_ordering(920) 00:14:06.142 fused_ordering(921) 00:14:06.142 fused_ordering(922) 00:14:06.142 fused_ordering(923) 00:14:06.142 fused_ordering(924) 00:14:06.142 fused_ordering(925) 00:14:06.142 fused_ordering(926) 00:14:06.142 fused_ordering(927) 00:14:06.142 fused_ordering(928) 00:14:06.142 fused_ordering(929) 00:14:06.142 fused_ordering(930) 00:14:06.142 fused_ordering(931) 00:14:06.142 fused_ordering(932) 00:14:06.142 fused_ordering(933) 00:14:06.142 fused_ordering(934) 00:14:06.142 fused_ordering(935) 00:14:06.142 fused_ordering(936) 00:14:06.142 
fused_ordering(937) 00:14:06.142 fused_ordering(938) 00:14:06.142 fused_ordering(939) 00:14:06.142 fused_ordering(940) 00:14:06.142 fused_ordering(941) 00:14:06.142 fused_ordering(942) 00:14:06.142 fused_ordering(943) 00:14:06.142 fused_ordering(944) 00:14:06.142 fused_ordering(945) 00:14:06.142 fused_ordering(946) 00:14:06.142 fused_ordering(947) 00:14:06.142 fused_ordering(948) 00:14:06.142 fused_ordering(949) 00:14:06.142 fused_ordering(950) 00:14:06.142 fused_ordering(951) 00:14:06.142 fused_ordering(952) 00:14:06.142 fused_ordering(953) 00:14:06.142 fused_ordering(954) 00:14:06.142 fused_ordering(955) 00:14:06.142 fused_ordering(956) 00:14:06.142 fused_ordering(957) 00:14:06.142 fused_ordering(958) 00:14:06.142 fused_ordering(959) 00:14:06.142 fused_ordering(960) 00:14:06.142 fused_ordering(961) 00:14:06.142 fused_ordering(962) 00:14:06.142 fused_ordering(963) 00:14:06.142 fused_ordering(964) 00:14:06.142 fused_ordering(965) 00:14:06.142 fused_ordering(966) 00:14:06.142 fused_ordering(967) 00:14:06.142 fused_ordering(968) 00:14:06.142 fused_ordering(969) 00:14:06.142 fused_ordering(970) 00:14:06.142 fused_ordering(971) 00:14:06.142 fused_ordering(972) 00:14:06.142 fused_ordering(973) 00:14:06.142 fused_ordering(974) 00:14:06.142 fused_ordering(975) 00:14:06.142 fused_ordering(976) 00:14:06.142 fused_ordering(977) 00:14:06.142 fused_ordering(978) 00:14:06.142 fused_ordering(979) 00:14:06.142 fused_ordering(980) 00:14:06.142 fused_ordering(981) 00:14:06.142 fused_ordering(982) 00:14:06.142 fused_ordering(983) 00:14:06.142 fused_ordering(984) 00:14:06.142 fused_ordering(985) 00:14:06.142 fused_ordering(986) 00:14:06.142 fused_ordering(987) 00:14:06.142 fused_ordering(988) 00:14:06.142 fused_ordering(989) 00:14:06.142 fused_ordering(990) 00:14:06.142 fused_ordering(991) 00:14:06.142 fused_ordering(992) 00:14:06.142 fused_ordering(993) 00:14:06.142 fused_ordering(994) 00:14:06.142 fused_ordering(995) 00:14:06.142 fused_ordering(996) 00:14:06.142 fused_ordering(997) 
00:14:06.142 fused_ordering(998) 00:14:06.142 fused_ordering(999) 00:14:06.142 fused_ordering(1000) 00:14:06.142 fused_ordering(1001) 00:14:06.142 fused_ordering(1002) 00:14:06.142 fused_ordering(1003) 00:14:06.142 fused_ordering(1004) 00:14:06.142 fused_ordering(1005) 00:14:06.142 fused_ordering(1006) 00:14:06.142 fused_ordering(1007) 00:14:06.142 fused_ordering(1008) 00:14:06.142 fused_ordering(1009) 00:14:06.142 fused_ordering(1010) 00:14:06.142 fused_ordering(1011) 00:14:06.142 fused_ordering(1012) 00:14:06.142 fused_ordering(1013) 00:14:06.142 fused_ordering(1014) 00:14:06.142 fused_ordering(1015) 00:14:06.142 fused_ordering(1016) 00:14:06.142 fused_ordering(1017) 00:14:06.142 fused_ordering(1018) 00:14:06.142 fused_ordering(1019) 00:14:06.142 fused_ordering(1020) 00:14:06.142 fused_ordering(1021) 00:14:06.142 fused_ordering(1022) 00:14:06.142 fused_ordering(1023) 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.142 rmmod nvme_tcp 00:14:06.142 rmmod nvme_fabrics 00:14:06.142 rmmod nvme_keyring 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2344882 ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2344882 ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2344882' 00:14:06.142 killing process with pid 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2344882 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.142 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.689 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.689 00:14:08.689 real 0m13.664s 00:14:08.689 user 0m7.161s 00:14:08.689 sys 0m7.409s 00:14:08.689 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.689 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.689 ************************************ 00:14:08.689 END TEST nvmf_fused_ordering 00:14:08.689 ************************************ 00:14:08.689 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.690 13:55:54 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 ************************************ 00:14:08.690 START TEST nvmf_ns_masking 00:14:08.690 ************************************ 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.690 * Looking for test storage... 00:14:08.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.690 13:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.690 --rc genhtml_branch_coverage=1 00:14:08.690 --rc genhtml_function_coverage=1 00:14:08.690 --rc genhtml_legend=1 00:14:08.690 --rc geninfo_all_blocks=1 00:14:08.690 --rc geninfo_unexecuted_blocks=1 00:14:08.690 00:14:08.690 ' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.690 --rc genhtml_branch_coverage=1 00:14:08.690 --rc genhtml_function_coverage=1 00:14:08.690 --rc genhtml_legend=1 00:14:08.690 --rc geninfo_all_blocks=1 00:14:08.690 --rc geninfo_unexecuted_blocks=1 00:14:08.690 00:14:08.690 ' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.690 --rc genhtml_branch_coverage=1 00:14:08.690 --rc genhtml_function_coverage=1 00:14:08.690 --rc genhtml_legend=1 00:14:08.690 --rc geninfo_all_blocks=1 00:14:08.690 --rc geninfo_unexecuted_blocks=1 00:14:08.690 00:14:08.690 ' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.690 --rc genhtml_branch_coverage=1 00:14:08.690 --rc 
genhtml_function_coverage=1 00:14:08.690 --rc genhtml_legend=1 00:14:08.690 --rc geninfo_all_blocks=1 00:14:08.690 --rc geninfo_unexecuted_blocks=1 00:14:08.690 00:14:08.690 ' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.690 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=81cd9ede-7e1e-403e-a279-d0f8ea110aee 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=baf922f5-04a9-457f-89fb-ce8194a994fb 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=64da9aa5-d002-4a19-af2e-bc412b0679e1 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.691 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.831 13:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.831 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.832 13:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:16.832 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:16.832 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:14:16.832 Found net devices under 0000:31:00.0: cvl_0_0 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:16.832 Found net devices under 0000:31:00.1: cvl_0_1 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.832 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:14:16.832 00:14:16.832 --- 10.0.0.2 ping statistics --- 00:14:16.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.832 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:14:16.832 00:14:16.832 --- 10.0.0.1 ping statistics --- 00:14:16.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.832 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2349892 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2349892 
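The `nvmf_tcp_init` sequence traced above moves the target-side interface into its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) traffic actually traverses the link. A condensed recipe of those commands is sketched below; the interface names `cvl_0_0`/`cvl_0_1` are specific to this machine's ice NICs, and the commands are printed rather than executed because the real sequence needs root and the physical devices.

```shell
# Condensed netns setup from the nvmf_tcp_init trace above. Printed, not run:
# the real commands require root and the cvl_0_0/cvl_0_1 NICs of this host.
setup_recipe() {
  cat <<'EOF'
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
EOF
}
setup_recipe
```

The two `ping -c 1` checks in the log (host → 10.0.0.2, netns → 10.0.0.1) then confirm both directions work before `nvmf_tgt` is started inside the namespace via `ip netns exec`.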
00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2349892 ']' 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.832 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:16.833 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.833 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:16.833 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.833 [2024-11-06 13:56:02.413038] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:14:16.833 [2024-11-06 13:56:02.413103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.833 [2024-11-06 13:56:02.514933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.833 [2024-11-06 13:56:02.566107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.833 [2024-11-06 13:56:02.566152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.833 [2024-11-06 13:56:02.566160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.833 [2024-11-06 13:56:02.566167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.833 [2024-11-06 13:56:02.566173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.833 [2024-11-06 13:56:02.566954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.094 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.356 [2024-11-06 13:56:03.423063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.356 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:17.356 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:17.356 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:17.617 Malloc1 00:14:17.617 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.617 Malloc2 00:14:17.617 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.878 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:18.139 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.400 [2024-11-06 13:56:04.448546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.400 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:18.400 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 64da9aa5-d002-4a19-af2e-bc412b0679e1 -a 10.0.0.2 -s 4420 -i 4 00:14:18.660 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.660 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:18.660 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.660 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:18.660 13:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.574 [ 0]:0x1 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.574 
13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da53bfa941e44180a30c1625b16b7b52 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da53bfa941e44180a30c1625b16b7b52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.574 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.834 [ 0]:0x1 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da53bfa941e44180a30c1625b16b7b52 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da53bfa941e44180a30c1625b16b7b52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.834 [ 1]:0x2 00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:20.834 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.095 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:21.095 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.095 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:21.095 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.410 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.410 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 64da9aa5-d002-4a19-af2e-bc412b0679e1 -a 10.0.0.2 -s 4420 -i 4 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.711 13:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:21.711 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:24.251 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.251 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
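The `ns_is_visible` checks traced above boil down to one comparison: `nvme id-ns … -o json | jq -r .nguid` returns the real NGUID when the namespace is visible to the host, and all zeros when it is masked. A minimal standalone sketch of that logic, with the `nvme`/`jq` output mocked by fixed strings from this run so it runs without hardware:

```shell
# Sketch of the visibility test used by ns_masking.sh: a namespace is treated
# as masked when its reported NGUID is all zeros. The nguid argument stands in
# for: nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid
ns_is_visible() {
  nguid=$1
  [ "$nguid" != "00000000000000000000000000000000" ]
}

# NGUID values taken from the log above: Malloc1's real NGUID vs. a masked ns.
for g in da53bfa941e44180a30c1625b16b7b52 00000000000000000000000000000000; do
  if ns_is_visible "$g"; then echo "visible: $g"; else echo "masked:  $g"; fi
done
```

This is why the `NOT ns_is_visible 0x1` steps in the trace succeed after the namespace is re-added with `--no-auto-visible`: the controller still enumerates NSID 1, but reports a zeroed NGUID to the unlisted host.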
# ns_is_visible 0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.252 [ 0]:0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.252 [ 0]:0x1 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da53bfa941e44180a30c1625b16b7b52 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da53bfa941e44180a30c1625b16b7b52 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.252 [ 1]:0x2 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.252 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.512 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.773 [ 0]:0x2 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.773 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.033 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:25.033 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 64da9aa5-d002-4a19-af2e-bc412b0679e1 -a 10.0.0.2 -s 4420 -i 4 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
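The masking state machine driven through `rpc.py` in the trace can be summarized as three RPCs. The sketch below prints the command lines rather than executing them, since they need a live `nvmf_tgt` listening on `/var/tmp/spdk.sock`; `scripts/rpc.py` is assumed to be invoked from an SPDK checkout (the log uses the full Jenkins workspace path).

```shell
# Masking workflow from the trace, shown as printed commands (a live nvmf_tgt
# is required to actually run them). Paths and NQNs are taken from the log.
RPC="scripts/rpc.py"   # assumption: run from the SPDK repo root
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

echo "$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 --no-auto-visible"  # ns 1 starts masked
echo "$RPC nvmf_ns_add_host $NQN 1 $HOST"                              # unmask ns 1 for this host
echo "$RPC nvmf_ns_remove_host $NQN 1 $HOST"                           # mask it again
```

Each transition is then verified from the initiator side with `ns_is_visible` (or `NOT ns_is_visible`), as the surrounding trace shows.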
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:25.303 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.215 [ 0]:0x1 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.215 13:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da53bfa941e44180a30c1625b16b7b52 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da53bfa941e44180a30c1625b16b7b52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.215 [ 1]:0x2 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.215 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.476 
13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.476 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.736 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.737 [ 0]:0x2 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.737 13:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.737 [2024-11-06 13:56:13.974566] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:27.737 request: 00:14:27.737 { 00:14:27.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.737 "nsid": 2, 00:14:27.737 "host": "nqn.2016-06.io.spdk:host1", 00:14:27.737 "method": "nvmf_ns_remove_host", 00:14:27.737 "req_id": 1 00:14:27.737 } 00:14:27.737 Got JSON-RPC error response 00:14:27.737 response: 00:14:27.737 { 00:14:27.737 "code": -32602, 00:14:27.737 "message": "Invalid parameters" 00:14:27.737 } 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.737 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.737 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.737 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.998 13:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.998 [ 0]:0x2 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8ada1cf796be48c1a6ab550dfc2e148e 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8ada1cf796be48c1a6ab550dfc2e148e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2352142 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2352142 /var/tmp/host.sock 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2352142 ']' 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:27.998 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 [2024-11-06 13:56:14.219820] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:14:27.998 [2024-11-06 13:56:14.219872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352142 ] 00:14:28.260 [2024-11-06 13:56:14.309027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.260 [2024-11-06 13:56:14.345099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.831 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.831 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:28.831 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.091 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 81cd9ede-7e1e-403e-a279-d0f8ea110aee 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 81CD9EDE7E1E403EA279D0F8EA110AEE -i 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid baf922f5-04a9-457f-89fb-ce8194a994fb 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.352 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BAF922F504A9457F89FBCE8194A994FB -i 00:14:29.612 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.874 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:29.874 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.874 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:30.134 nvme0n1 00:14:30.134 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:30.134 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:30.395 nvme1n2 00:14:30.395 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:30.395 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:30.395 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:30.395 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:30.395 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:30.655 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:30.655 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:30.655 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:30.655 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:30.915 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 81cd9ede-7e1e-403e-a279-d0f8ea110aee == \8\1\c\d\9\e\d\e\-\7\e\1\e\-\4\0\3\e\-\a\2\7\9\-\d\0\f\8\e\a\1\1\0\a\e\e ]] 00:14:30.915 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:30.915 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:30.915 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:30.915 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ baf922f5-04a9-457f-89fb-ce8194a994fb == \b\a\f\9\2\2\f\5\-\0\4\a\9\-\4\5\7\f\-\8\9\f\b\-\c\e\8\1\9\4\a\9\9\4\f\b ]] 00:14:30.915 13:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.175 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 81cd9ede-7e1e-403e-a279-d0f8ea110aee 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81CD9EDE7E1E403EA279D0F8EA110AEE 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81CD9EDE7E1E403EA279D0F8EA110AEE 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.435 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81CD9EDE7E1E403EA279D0F8EA110AEE 00:14:31.436 [2024-11-06 13:56:17.652233] bdev.c:8469:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:31.436 [2024-11-06 13:56:17.652260] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:31.436 [2024-11-06 13:56:17.652266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.436 request: 00:14:31.436 { 00:14:31.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.436 "namespace": { 00:14:31.436 "bdev_name": "invalid", 00:14:31.436 "nsid": 1, 00:14:31.436 "nguid": "81CD9EDE7E1E403EA279D0F8EA110AEE", 00:14:31.436 "no_auto_visible": false 00:14:31.436 }, 00:14:31.436 "method": "nvmf_subsystem_add_ns", 00:14:31.436 "req_id": 1 00:14:31.436 } 00:14:31.436 Got JSON-RPC error response 00:14:31.436 response: 00:14:31.436 { 00:14:31.436 "code": -32602, 00:14:31.436 "message": "Invalid parameters" 00:14:31.436 } 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 81cd9ede-7e1e-403e-a279-d0f8ea110aee 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.436 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 81CD9EDE7E1E403EA279D0F8EA110AEE -i 00:14:31.696 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:33.607 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:33.607 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:33.607 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2352142 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2352142 ']' 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2352142 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2352142 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2352142' 00:14:33.867 killing process with pid 2352142 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2352142 00:14:33.867 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2352142 00:14:34.127 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.388 rmmod nvme_tcp 00:14:34.388 rmmod 
nvme_fabrics 00:14:34.388 rmmod nvme_keyring 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2349892 ']' 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2349892 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2349892 ']' 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2349892 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2349892 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2349892' 00:14:34.388 killing process with pid 2349892 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2349892 00:14:34.388 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2349892 00:14:34.648 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.648 
13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.648 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.649 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.562 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.562 00:14:36.562 real 0m28.332s 00:14:36.562 user 0m31.986s 00:14:36.562 sys 0m8.294s 00:14:36.562 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.562 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:36.562 ************************************ 00:14:36.562 END TEST nvmf_ns_masking 00:14:36.562 ************************************ 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.824 ************************************ 00:14:36.824 START TEST nvmf_nvme_cli 00:14:36.824 ************************************ 00:14:36.824 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:36.824 * Looking for test storage... 00:14:36.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.824 13:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.824 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.825 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:36.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.825 --rc genhtml_branch_coverage=1 00:14:36.825 --rc genhtml_function_coverage=1 00:14:36.825 --rc genhtml_legend=1 00:14:36.825 --rc geninfo_all_blocks=1 00:14:36.825 --rc geninfo_unexecuted_blocks=1 00:14:36.825 
00:14:36.825 ' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.087 --rc genhtml_branch_coverage=1 00:14:37.087 --rc genhtml_function_coverage=1 00:14:37.087 --rc genhtml_legend=1 00:14:37.087 --rc geninfo_all_blocks=1 00:14:37.087 --rc geninfo_unexecuted_blocks=1 00:14:37.087 00:14:37.087 ' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.087 --rc genhtml_branch_coverage=1 00:14:37.087 --rc genhtml_function_coverage=1 00:14:37.087 --rc genhtml_legend=1 00:14:37.087 --rc geninfo_all_blocks=1 00:14:37.087 --rc geninfo_unexecuted_blocks=1 00:14:37.087 00:14:37.087 ' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.087 --rc genhtml_branch_coverage=1 00:14:37.087 --rc genhtml_function_coverage=1 00:14:37.087 --rc genhtml_legend=1 00:14:37.087 --rc geninfo_all_blocks=1 00:14:37.087 --rc geninfo_unexecuted_blocks=1 00:14:37.087 00:14:37.087 ' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.087 13:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.087 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:45.230 13:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:45.230 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:45.230 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.230 13:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:45.230 Found net devices under 0000:31:00.0: cvl_0_0 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:45.230 Found net devices under 0000:31:00.1: cvl_0_1 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:45.230 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.231 13:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:45.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:14:45.231 00:14:45.231 --- 10.0.0.2 ping statistics --- 00:14:45.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.231 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:14:45.231 00:14:45.231 --- 10.0.0.1 ping statistics --- 00:14:45.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.231 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.231 13:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2357814 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2357814 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2357814 ']' 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:45.231 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.231 [2024-11-06 13:56:30.830900] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:14:45.231 [2024-11-06 13:56:30.830967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.231 [2024-11-06 13:56:30.931191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.231 [2024-11-06 13:56:30.985828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.231 [2024-11-06 13:56:30.985881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.231 [2024-11-06 13:56:30.985893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.231 [2024-11-06 13:56:30.985900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.231 [2024-11-06 13:56:30.985906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.231 [2024-11-06 13:56:30.988307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.231 [2024-11-06 13:56:30.988444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.231 [2024-11-06 13:56:30.988603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.231 [2024-11-06 13:56:30.988605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.493 [2024-11-06 13:56:31.707152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.493 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.494 Malloc0 00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.494 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 Malloc1 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.755 [2024-11-06 13:56:31.817079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.755 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:14:45.755 00:14:45.755 Discovery Log Number of Records 2, Generation counter 2 00:14:45.755 =====Discovery Log Entry 0====== 00:14:45.755 trtype: tcp 00:14:45.755 adrfam: ipv4 00:14:45.755 subtype: current discovery subsystem 00:14:45.755 treq: not required 00:14:45.755 portid: 0 00:14:45.755 trsvcid: 4420 
00:14:45.755 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.755 traddr: 10.0.0.2 00:14:45.755 eflags: explicit discovery connections, duplicate discovery information 00:14:45.755 sectype: none 00:14:45.755 =====Discovery Log Entry 1====== 00:14:45.755 trtype: tcp 00:14:45.755 adrfam: ipv4 00:14:45.755 subtype: nvme subsystem 00:14:45.755 treq: not required 00:14:45.755 portid: 0 00:14:45.755 trsvcid: 4420 00:14:45.755 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:45.755 traddr: 10.0.0.2 00:14:45.755 eflags: none 00:14:45.755 sectype: none 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.755 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.016 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:46.016 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.402 13:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:47.402 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:47.402 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.402 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:47.402 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:47.402 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.345 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:49.605 
13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:49.605 /dev/nvme0n2 ]] 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.605 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:49.865 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.126 rmmod nvme_tcp 00:14:50.126 rmmod nvme_fabrics 00:14:50.126 rmmod nvme_keyring 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2357814 ']' 
00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2357814 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2357814 ']' 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2357814 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2357814 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2357814' 00:14:50.126 killing process with pid 2357814 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2357814 00:14:50.126 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2357814 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.387 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:50.388 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.388 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.388 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.300 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:52.300 00:14:52.300 real 0m15.676s 00:14:52.300 user 0m24.227s 00:14:52.300 sys 0m6.495s 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.561 ************************************ 00:14:52.561 END TEST nvmf_nvme_cli 00:14:52.561 ************************************ 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.561 ************************************ 00:14:52.561 
START TEST nvmf_vfio_user 00:14:52.561 ************************************ 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:52.561 * Looking for test storage... 00:14:52.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:52.561 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.823 13:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:52.823 13:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.823 --rc genhtml_branch_coverage=1 00:14:52.823 --rc genhtml_function_coverage=1 00:14:52.823 --rc genhtml_legend=1 00:14:52.823 --rc geninfo_all_blocks=1 00:14:52.823 --rc geninfo_unexecuted_blocks=1 00:14:52.823 00:14:52.823 ' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.823 --rc genhtml_branch_coverage=1 00:14:52.823 --rc genhtml_function_coverage=1 00:14:52.823 --rc genhtml_legend=1 00:14:52.823 --rc geninfo_all_blocks=1 00:14:52.823 --rc geninfo_unexecuted_blocks=1 00:14:52.823 00:14:52.823 ' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.823 --rc genhtml_branch_coverage=1 00:14:52.823 --rc genhtml_function_coverage=1 00:14:52.823 --rc genhtml_legend=1 00:14:52.823 --rc geninfo_all_blocks=1 00:14:52.823 --rc geninfo_unexecuted_blocks=1 00:14:52.823 00:14:52.823 ' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.823 --rc genhtml_branch_coverage=1 00:14:52.823 --rc genhtml_function_coverage=1 00:14:52.823 --rc genhtml_legend=1 00:14:52.823 --rc geninfo_all_blocks=1 00:14:52.823 --rc geninfo_unexecuted_blocks=1 00:14:52.823 00:14:52.823 ' 00:14:52.823 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.824 
13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:52.824 13:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2359376 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2359376' 00:14:52.824 Process pid: 2359376 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2359376 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' 
-z 2359376 ']' 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.824 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:52.824 [2024-11-06 13:56:38.954383] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:14:52.824 [2024-11-06 13:56:38.954438] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.824 [2024-11-06 13:56:39.037762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.824 [2024-11-06 13:56:39.069708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.824 [2024-11-06 13:56:39.069738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.824 [2024-11-06 13:56:39.069743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.824 [2024-11-06 13:56:39.069753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.824 [2024-11-06 13:56:39.069757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:52.824 [2024-11-06 13:56:39.071045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.824 [2024-11-06 13:56:39.071195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.824 [2024-11-06 13:56:39.071315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.824 [2024-11-06 13:56:39.071317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.765 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.765 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:53.765 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:54.706 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:54.967 Malloc1 00:14:54.967 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:55.227 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:55.487 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:55.487 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.487 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:55.487 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:55.751 Malloc2 00:14:55.751 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:56.013 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:56.013 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.276 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.276 [2024-11-06 13:56:42.448309] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:14:56.276 [2024-11-06 13:56:42.448330] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360067 ] 00:14:56.276 [2024-11-06 13:56:42.485051] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:56.276 [2024-11-06 13:56:42.490307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.276 [2024-11-06 13:56:42.490324] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f348ce4d000 00:14:56.276 [2024-11-06 13:56:42.491311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.492319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.493320] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.494334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.495337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.496340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.497339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.498348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.276 [2024-11-06 13:56:42.499359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.276 [2024-11-06 13:56:42.499367] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f348ce42000 00:14:56.276 [2024-11-06 13:56:42.500278] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.276 [2024-11-06 13:56:42.509731] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:56.276 [2024-11-06 13:56:42.509759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:56.276 [2024-11-06 13:56:42.515451] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:56.276 [2024-11-06 13:56:42.515484] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:56.276 [2024-11-06 13:56:42.515543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:56.276 [2024-11-06 13:56:42.515560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:56.276 [2024-11-06 13:56:42.515564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:56.277 [2024-11-06 13:56:42.516454] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:56.277 [2024-11-06 13:56:42.516461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:56.277 [2024-11-06 13:56:42.516467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:56.277 [2024-11-06 13:56:42.517461] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:56.277 [2024-11-06 13:56:42.517467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:56.277 [2024-11-06 13:56:42.517473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.518470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:56.277 [2024-11-06 13:56:42.518477] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.519474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:56.277 [2024-11-06 13:56:42.519481] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:56.277 [2024-11-06 13:56:42.519484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.519489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.519596] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:56.277 [2024-11-06 13:56:42.519600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.519605] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:56.277 [2024-11-06 13:56:42.520491] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:56.277 [2024-11-06 13:56:42.521488] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:56.277 [2024-11-06 13:56:42.522500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:56.277 [2024-11-06 13:56:42.523500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.277 [2024-11-06 13:56:42.523566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:56.277 [2024-11-06 13:56:42.524513] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:56.277 [2024-11-06 13:56:42.524519] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:56.277 [2024-11-06 13:56:42.524524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:56.277 [2024-11-06 13:56:42.524548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524561] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.277 [2024-11-06 13:56:42.524566] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.277 [2024-11-06 13:56:42.524569] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.277 [2024-11-06 13:56:42.524581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524619] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:56.277 [2024-11-06 13:56:42.524623] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:56.277 [2024-11-06 13:56:42.524626] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:56.277 [2024-11-06 13:56:42.524630] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:56.277 [2024-11-06 13:56:42.524637] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:56.277 [2024-11-06 13:56:42.524640] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:56.277 [2024-11-06 13:56:42.524644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.277 [2024-11-06 
13:56:42.524684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.277 [2024-11-06 13:56:42.524690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.277 [2024-11-06 13:56:42.524696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.277 [2024-11-06 13:56:42.524699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524728] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:56.277 [2024-11-06 13:56:42.524731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:56.277 [2024-11-06 13:56:42.524817] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:56.277 [2024-11-06 13:56:42.524819] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.277 [2024-11-06 13:56:42.524824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524845] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:56.277 [2024-11-06 13:56:42.524852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:56.277 [2024-11-06 13:56:42.524863] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.277 [2024-11-06 13:56:42.524866] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.277 [2024-11-06 13:56:42.524868] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.277 [2024-11-06 13:56:42.524872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.277 [2024-11-06 13:56:42.524887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:56.277 [2024-11-06 13:56:42.524897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.278 [2024-11-06 13:56:42.524910] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.278 [2024-11-06 13:56:42.524912] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.278 [2024-11-06 13:56:42.524918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.524926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.524933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524959] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:56.278 [2024-11-06 13:56:42.524962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:56.278 [2024-11-06 13:56:42.524966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:56.278 [2024-11-06 13:56:42.524980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.524991] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525052] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:56.278 [2024-11-06 13:56:42.525055] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:56.278 [2024-11-06 13:56:42.525058] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:56.278 [2024-11-06 13:56:42.525060] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:56.278 [2024-11-06 13:56:42.525063] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:56.278 [2024-11-06 13:56:42.525067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:56.278 [2024-11-06 13:56:42.525073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:56.278 [2024-11-06 13:56:42.525077] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:56.278 [2024-11-06 13:56:42.525079] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.278 [2024-11-06 13:56:42.525083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525089] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:56.278 [2024-11-06 13:56:42.525092] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.278 [2024-11-06 13:56:42.525094] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.278 [2024-11-06 13:56:42.525098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525104] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:56.278 [2024-11-06 13:56:42.525107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:56.278 [2024-11-06 13:56:42.525109] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.278 [2024-11-06 13:56:42.525113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:56.278 [2024-11-06 13:56:42.525118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:56.278 [2024-11-06 13:56:42.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:56.278 ===================================================== 00:14:56.278 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.278 ===================================================== 00:14:56.278 Controller Capabilities/Features 00:14:56.278 ================================ 00:14:56.278 Vendor ID: 4e58 00:14:56.278 Subsystem Vendor ID: 4e58 00:14:56.278 Serial Number: SPDK1 00:14:56.278 Model Number: SPDK bdev Controller 00:14:56.278 Firmware Version: 25.01 00:14:56.278 Recommended Arb Burst: 6 00:14:56.278 IEEE OUI Identifier: 8d 6b 50 00:14:56.278 Multi-path I/O 00:14:56.278 May have multiple subsystem ports: Yes 00:14:56.278 May have multiple controllers: Yes 00:14:56.278 Associated with SR-IOV VF: No 00:14:56.278 Max Data Transfer Size: 131072 00:14:56.278 Max Number of Namespaces: 32 00:14:56.278 Max Number of I/O Queues: 127 00:14:56.278 NVMe Specification Version (VS): 1.3 00:14:56.278 NVMe Specification Version (Identify): 1.3 00:14:56.278 Maximum Queue Entries: 256 00:14:56.278 Contiguous Queues Required: Yes 00:14:56.278 Arbitration Mechanisms Supported 00:14:56.278 Weighted Round Robin: Not Supported 00:14:56.278 Vendor Specific: Not Supported 00:14:56.278 Reset Timeout: 15000 ms 00:14:56.278 Doorbell Stride: 4 bytes 00:14:56.278 NVM Subsystem Reset: Not Supported 00:14:56.278 Command Sets Supported 00:14:56.294 NVM Command Set: Supported 00:14:56.294 Boot Partition: Not Supported 00:14:56.294 Memory 
Page Size Minimum: 4096 bytes 00:14:56.294 Memory Page Size Maximum: 4096 bytes 00:14:56.294 Persistent Memory Region: Not Supported 00:14:56.294 Optional Asynchronous Events Supported 00:14:56.294 Namespace Attribute Notices: Supported 00:14:56.294 Firmware Activation Notices: Not Supported 00:14:56.294 ANA Change Notices: Not Supported 00:14:56.294 PLE Aggregate Log Change Notices: Not Supported 00:14:56.294 LBA Status Info Alert Notices: Not Supported 00:14:56.294 EGE Aggregate Log Change Notices: Not Supported 00:14:56.294 Normal NVM Subsystem Shutdown event: Not Supported 00:14:56.294 Zone Descriptor Change Notices: Not Supported 00:14:56.294 Discovery Log Change Notices: Not Supported 00:14:56.294 Controller Attributes 00:14:56.294 128-bit Host Identifier: Supported 00:14:56.294 Non-Operational Permissive Mode: Not Supported 00:14:56.294 NVM Sets: Not Supported 00:14:56.294 Read Recovery Levels: Not Supported 00:14:56.294 Endurance Groups: Not Supported 00:14:56.294 Predictable Latency Mode: Not Supported 00:14:56.294 Traffic Based Keep ALive: Not Supported 00:14:56.294 Namespace Granularity: Not Supported 00:14:56.294 SQ Associations: Not Supported 00:14:56.294 UUID List: Not Supported 00:14:56.294 Multi-Domain Subsystem: Not Supported 00:14:56.294 Fixed Capacity Management: Not Supported 00:14:56.294 Variable Capacity Management: Not Supported 00:14:56.294 Delete Endurance Group: Not Supported 00:14:56.294 Delete NVM Set: Not Supported 00:14:56.294 Extended LBA Formats Supported: Not Supported 00:14:56.294 Flexible Data Placement Supported: Not Supported 00:14:56.294 00:14:56.294 Controller Memory Buffer Support 00:14:56.294 ================================ 00:14:56.294 Supported: No 00:14:56.294 00:14:56.294 Persistent Memory Region Support 00:14:56.294 ================================ 00:14:56.294 Supported: No 00:14:56.294 00:14:56.294 Admin Command Set Attributes 00:14:56.294 ============================ 00:14:56.294 Security Send/Receive: Not Supported 
00:14:56.294 Format NVM: Not Supported 00:14:56.294 Firmware Activate/Download: Not Supported 00:14:56.294 Namespace Management: Not Supported 00:14:56.294 Device Self-Test: Not Supported 00:14:56.294 Directives: Not Supported 00:14:56.294 NVMe-MI: Not Supported 00:14:56.294 Virtualization Management: Not Supported 00:14:56.294 Doorbell Buffer Config: Not Supported 00:14:56.294 Get LBA Status Capability: Not Supported 00:14:56.294 Command & Feature Lockdown Capability: Not Supported 00:14:56.294 Abort Command Limit: 4 00:14:56.294 Async Event Request Limit: 4 00:14:56.294 Number of Firmware Slots: N/A 00:14:56.294 Firmware Slot 1 Read-Only: N/A 00:14:56.294 Firmware Activation Without Reset: N/A 00:14:56.294 Multiple Update Detection Support: N/A 00:14:56.294 Firmware Update Granularity: No Information Provided 00:14:56.294 Per-Namespace SMART Log: No 00:14:56.294 Asymmetric Namespace Access Log Page: Not Supported 00:14:56.294 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:56.294 Command Effects Log Page: Supported 00:14:56.294 Get Log Page Extended Data: Supported 00:14:56.294 Telemetry Log Pages: Not Supported 00:14:56.294 Persistent Event Log Pages: Not Supported 00:14:56.294 Supported Log Pages Log Page: May Support 00:14:56.294 Commands Supported & Effects Log Page: Not Supported 00:14:56.294 Feature Identifiers & Effects Log Page:May Support 00:14:56.294 NVMe-MI Commands & Effects Log Page: May Support 00:14:56.294 Data Area 4 for Telemetry Log: Not Supported 00:14:56.294 Error Log Page Entries Supported: 128 00:14:56.294 Keep Alive: Supported 00:14:56.294 Keep Alive Granularity: 10000 ms 00:14:56.294 00:14:56.294 NVM Command Set Attributes 00:14:56.294 ========================== 00:14:56.294 Submission Queue Entry Size 00:14:56.294 Max: 64 00:14:56.294 Min: 64 00:14:56.294 Completion Queue Entry Size 00:14:56.294 Max: 16 00:14:56.294 Min: 16 00:14:56.294 Number of Namespaces: 32 00:14:56.294 Compare Command: Supported 00:14:56.294 Write Uncorrectable 
Command: Not Supported 00:14:56.294 Dataset Management Command: Supported 00:14:56.294 Write Zeroes Command: Supported 00:14:56.294 Set Features Save Field: Not Supported 00:14:56.294 Reservations: Not Supported 00:14:56.294 Timestamp: Not Supported 00:14:56.294 Copy: Supported 00:14:56.294 Volatile Write Cache: Present 00:14:56.294 Atomic Write Unit (Normal): 1 00:14:56.294 Atomic Write Unit (PFail): 1 00:14:56.294 Atomic Compare & Write Unit: 1 00:14:56.294 Fused Compare & Write: Supported 00:14:56.294 Scatter-Gather List 00:14:56.294 SGL Command Set: Supported (Dword aligned) 00:14:56.294 SGL Keyed: Not Supported 00:14:56.294 SGL Bit Bucket Descriptor: Not Supported 00:14:56.294 SGL Metadata Pointer: Not Supported 00:14:56.294 Oversized SGL: Not Supported 00:14:56.294 SGL Metadata Address: Not Supported 00:14:56.294 SGL Offset: Not Supported 00:14:56.294 Transport SGL Data Block: Not Supported 00:14:56.294 Replay Protected Memory Block: Not Supported 00:14:56.294 00:14:56.294 Firmware Slot Information 00:14:56.294 ========================= 00:14:56.294 Active slot: 1 00:14:56.294 Slot 1 Firmware Revision: 25.01 00:14:56.294 00:14:56.294 00:14:56.294 Commands Supported and Effects 00:14:56.294 ============================== 00:14:56.294 Admin Commands 00:14:56.294 -------------- 00:14:56.294 Get Log Page (02h): Supported 00:14:56.294 Identify (06h): Supported 00:14:56.294 Abort (08h): Supported 00:14:56.294 Set Features (09h): Supported 00:14:56.294 Get Features (0Ah): Supported 00:14:56.294 Asynchronous Event Request (0Ch): Supported 00:14:56.294 Keep Alive (18h): Supported 00:14:56.294 I/O Commands 00:14:56.294 ------------ 00:14:56.295 Flush (00h): Supported LBA-Change 00:14:56.295 Write (01h): Supported LBA-Change 00:14:56.295 Read (02h): Supported 00:14:56.295 Compare (05h): Supported 00:14:56.295 Write Zeroes (08h): Supported LBA-Change 00:14:56.295 Dataset Management (09h): Supported LBA-Change 00:14:56.295 Copy (19h): Supported LBA-Change 00:14:56.295 
00:14:56.295 Error Log 00:14:56.295 ========= 00:14:56.295 00:14:56.295 Arbitration 00:14:56.295 =========== 00:14:56.295 Arbitration Burst: 1 00:14:56.295 00:14:56.295 Power Management 00:14:56.295 ================ 00:14:56.295 Number of Power States: 1 00:14:56.295 Current Power State: Power State #0 00:14:56.295 Power State #0: 00:14:56.295 Max Power: 0.00 W 00:14:56.295 Non-Operational State: Operational 00:14:56.295 Entry Latency: Not Reported 00:14:56.295 Exit Latency: Not Reported 00:14:56.295 Relative Read Throughput: 0 00:14:56.295 Relative Read Latency: 0 00:14:56.295 Relative Write Throughput: 0 00:14:56.295 Relative Write Latency: 0 00:14:56.295 Idle Power: Not Reported 00:14:56.295 Active Power: Not Reported 00:14:56.295 Non-Operational Permissive Mode: Not Supported 00:14:56.295 00:14:56.295 Health Information 00:14:56.295 ================== 00:14:56.295 Critical Warnings: 00:14:56.295 Available Spare Space: OK 00:14:56.295 Temperature: OK 00:14:56.295 Device Reliability: OK 00:14:56.295 Read Only: No 00:14:56.295 Volatile Memory Backup: OK 00:14:56.295 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:56.295 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:56.295 Available Spare: 0%
[2024-11-06 13:56:42.525214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:56.295 [2024-11-06 13:56:42.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:56.295 [2024-11-06 13:56:42.525243] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:56.295 [2024-11-06 13:56:42.525250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.295 [2024-11-06 13:56:42.525255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.295 [2024-11-06 13:56:42.525259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.295 [2024-11-06 13:56:42.525264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.295 [2024-11-06 13:56:42.525518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:56.295 [2024-11-06 13:56:42.525526] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:56.295 [2024-11-06 13:56:42.526519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.295 [2024-11-06 13:56:42.526560] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:56.295 [2024-11-06 13:56:42.526565] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:56.295 [2024-11-06 13:56:42.527524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:56.295 [2024-11-06 13:56:42.527534] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:56.295 [2024-11-06 13:56:42.527590] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:56.295 [2024-11-06 13:56:42.528546] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:56.556 Available Spare Threshold: 0% 00:14:56.556 Life Percentage Used: 0%
00:14:56.556 Data Units Read: 0 00:14:56.556 Data Units Written: 0 00:14:56.556 Host Read Commands: 0 00:14:56.556 Host Write Commands: 0 00:14:56.556 Controller Busy Time: 0 minutes 00:14:56.556 Power Cycles: 0 00:14:56.556 Power On Hours: 0 hours 00:14:56.556 Unsafe Shutdowns: 0 00:14:56.556 Unrecoverable Media Errors: 0 00:14:56.556 Lifetime Error Log Entries: 0 00:14:56.556 Warning Temperature Time: 0 minutes 00:14:56.556 Critical Temperature Time: 0 minutes 00:14:56.556 00:14:56.556 Number of Queues 00:14:56.556 ================ 00:14:56.556 Number of I/O Submission Queues: 127 00:14:56.556 Number of I/O Completion Queues: 127 00:14:56.556 00:14:56.556 Active Namespaces 00:14:56.556 ================= 00:14:56.556 Namespace ID:1 00:14:56.556 Error Recovery Timeout: Unlimited 00:14:56.556 Command Set Identifier: NVM (00h) 00:14:56.556 Deallocate: Supported 00:14:56.556 Deallocated/Unwritten Error: Not Supported 00:14:56.556 Deallocated Read Value: Unknown 00:14:56.556 Deallocate in Write Zeroes: Not Supported 00:14:56.556 Deallocated Guard Field: 0xFFFF 00:14:56.556 Flush: Supported 00:14:56.556 Reservation: Supported 00:14:56.556 Namespace Sharing Capabilities: Multiple Controllers 00:14:56.556 Size (in LBAs): 131072 (0GiB) 00:14:56.556 Capacity (in LBAs): 131072 (0GiB) 00:14:56.556 Utilization (in LBAs): 131072 (0GiB) 00:14:56.556 NGUID: B2C61E2E486E474B948300014474CD57 00:14:56.556 UUID: b2c61e2e-486e-474b-9483-00014474cd57 00:14:56.556 Thin Provisioning: Not Supported 00:14:56.556 Per-NS Atomic Units: Yes 00:14:56.556 Atomic Boundary Size (Normal): 0 00:14:56.556 Atomic Boundary Size (PFail): 0 00:14:56.556 Atomic Boundary Offset: 0 00:14:56.556 Maximum Single Source Range Length: 65535 00:14:56.556 Maximum Copy Length: 65535 00:14:56.556 Maximum Source Range Count: 1 00:14:56.556 NGUID/EUI64 Never Reused: No 00:14:56.556 Namespace Write Protected: No 00:14:56.556 Number of LBA Formats: 1 00:14:56.556 Current LBA Format: LBA Format #00 00:14:56.556 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:56.556 00:14:56.556 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:56.556 [2024-11-06 13:56:42.694394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.847 Initializing NVMe Controllers 00:15:01.847 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.847 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:01.847 Initialization complete. Launching workers. 00:15:01.847 ======================================================== 00:15:01.847 Latency(us) 00:15:01.848 Device Information : IOPS MiB/s Average min max 00:15:01.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39965.98 156.12 3202.59 845.48 9709.65 00:15:01.848 ======================================================== 00:15:01.848 Total : 39965.98 156.12 3202.59 845.48 9709.65 00:15:01.848 00:15:01.848 [2024-11-06 13:56:47.710798] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.848 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:01.848 [2024-11-06 13:56:47.903642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.139 Initializing NVMe Controllers 00:15:07.139 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.139 Initialization complete. Launching workers. 00:15:07.139 ======================================================== 00:15:07.139 Latency(us) 00:15:07.139 Device Information : IOPS MiB/s Average min max 00:15:07.139 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16027.10 62.61 7992.02 4986.66 10976.38 00:15:07.139 ======================================================== 00:15:07.139 Total : 16027.10 62.61 7992.02 4986.66 10976.38 00:15:07.139 00:15:07.139 [2024-11-06 13:56:52.945936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.139 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:07.139 [2024-11-06 13:56:53.152814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.427 [2024-11-06 13:56:58.225923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.427 Initializing NVMe Controllers 00:15:12.427 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.427 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:12.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:12.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:12.427 Initialization complete. 
Launching workers. 00:15:12.427 Starting thread on core 2 00:15:12.427 Starting thread on core 3 00:15:12.428 Starting thread on core 1 00:15:12.428 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:12.428 [2024-11-06 13:56:58.473274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.732 [2024-11-06 13:57:01.571881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.732 Initializing NVMe Controllers 00:15:15.732 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.732 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.732 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:15.732 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:15.732 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:15.732 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:15.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:15.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:15.732 Initialization complete. Launching workers. 
00:15:15.732 Starting thread on core 1 with urgent priority queue 00:15:15.732 Starting thread on core 2 with urgent priority queue 00:15:15.732 Starting thread on core 3 with urgent priority queue 00:15:15.732 Starting thread on core 0 with urgent priority queue 00:15:15.732 SPDK bdev Controller (SPDK1 ) core 0: 6096.00 IO/s 16.40 secs/100000 ios 00:15:15.732 SPDK bdev Controller (SPDK1 ) core 1: 7885.67 IO/s 12.68 secs/100000 ios 00:15:15.732 SPDK bdev Controller (SPDK1 ) core 2: 6888.00 IO/s 14.52 secs/100000 ios 00:15:15.732 SPDK bdev Controller (SPDK1 ) core 3: 6019.33 IO/s 16.61 secs/100000 ios 00:15:15.732 ======================================================== 00:15:15.732 00:15:15.732 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.732 [2024-11-06 13:57:01.810179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.732 Initializing NVMe Controllers 00:15:15.732 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.732 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.732 Namespace ID: 1 size: 0GB 00:15:15.732 Initialization complete. 00:15:15.732 INFO: using host memory buffer for IO 00:15:15.732 Hello world! 
00:15:15.732 [2024-11-06 13:57:01.847408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.732 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.992 [2024-11-06 13:57:02.082257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.932 Initializing NVMe Controllers 00:15:16.932 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.932 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.932 Initialization complete. Launching workers. 00:15:16.932 submit (in ns) avg, min, max = 5679.3, 2825.8, 3998528.3 00:15:16.932 complete (in ns) avg, min, max = 15851.7, 1625.8, 3997303.3 00:15:16.932 00:15:16.932 Submit histogram 00:15:16.932 ================ 00:15:16.932 Range in us Cumulative Count 00:15:16.932 2.813 - 2.827: 0.0048% ( 1) 00:15:16.932 2.827 - 2.840: 0.4216% ( 86) 00:15:16.932 2.840 - 2.853: 2.6508% ( 460) 00:15:16.932 2.853 - 2.867: 6.2418% ( 741) 00:15:16.932 2.867 - 2.880: 11.9263% ( 1173) 00:15:16.932 2.880 - 2.893: 17.8580% ( 1224) 00:15:16.932 2.893 - 2.907: 23.5038% ( 1165) 00:15:16.932 2.907 - 2.920: 28.8103% ( 1095) 00:15:16.932 2.920 - 2.933: 34.0974% ( 1091) 00:15:16.932 2.933 - 2.947: 39.1762% ( 1048) 00:15:16.932 2.947 - 2.960: 44.5021% ( 1099) 00:15:16.932 2.960 - 2.973: 50.1623% ( 1168) 00:15:16.932 2.973 - 2.987: 58.0422% ( 1626) 00:15:16.932 2.987 - 3.000: 67.1820% ( 1886) 00:15:16.932 3.000 - 3.013: 75.3913% ( 1694) 00:15:16.932 3.013 - 3.027: 81.9675% ( 1357) 00:15:16.932 3.027 - 3.040: 88.1900% ( 1284) 00:15:16.932 3.040 - 3.053: 93.1476% ( 1023) 00:15:16.932 3.053 - 3.067: 96.3993% ( 671) 00:15:16.932 3.067 - 3.080: 97.9937% ( 329) 00:15:16.932 3.080 - 3.093: 
98.7885% ( 164) 00:15:16.932 3.093 - 3.107: 99.1955% ( 84) 00:15:16.932 3.107 - 3.120: 99.4330% ( 49) 00:15:16.932 3.120 - 3.133: 99.5638% ( 27) 00:15:16.932 3.133 - 3.147: 99.6268% ( 13) 00:15:16.932 3.147 - 3.160: 99.6511% ( 5) 00:15:16.932 3.187 - 3.200: 99.6608% ( 2) 00:15:16.932 3.387 - 3.400: 99.6656% ( 1) 00:15:16.932 3.467 - 3.493: 99.6705% ( 1) 00:15:16.932 3.707 - 3.733: 99.6753% ( 1) 00:15:16.932 3.920 - 3.947: 99.6802% ( 1) 00:15:16.932 4.053 - 4.080: 99.6850% ( 1) 00:15:16.932 4.240 - 4.267: 99.6898% ( 1) 00:15:16.932 4.373 - 4.400: 99.6995% ( 2) 00:15:16.932 4.533 - 4.560: 99.7092% ( 2) 00:15:16.932 4.560 - 4.587: 99.7141% ( 1) 00:15:16.932 4.587 - 4.613: 99.7189% ( 1) 00:15:16.932 4.667 - 4.693: 99.7238% ( 1) 00:15:16.932 4.720 - 4.747: 99.7286% ( 1) 00:15:16.932 4.747 - 4.773: 99.7335% ( 1) 00:15:16.932 4.800 - 4.827: 99.7480% ( 3) 00:15:16.932 4.907 - 4.933: 99.7625% ( 3) 00:15:16.932 4.933 - 4.960: 99.7674% ( 1) 00:15:16.932 4.960 - 4.987: 99.7771% ( 2) 00:15:16.932 4.987 - 5.013: 99.7965% ( 4) 00:15:16.932 5.013 - 5.040: 99.8013% ( 1) 00:15:16.932 5.040 - 5.067: 99.8207% ( 4) 00:15:16.932 5.067 - 5.093: 99.8352% ( 3) 00:15:16.932 5.173 - 5.200: 99.8401% ( 1) 00:15:16.932 5.253 - 5.280: 99.8449% ( 1) 00:15:16.932 5.440 - 5.467: 99.8498% ( 1) 00:15:16.932 5.467 - 5.493: 99.8595% ( 2) 00:15:16.932 5.493 - 5.520: 99.8643% ( 1) 00:15:16.932 5.520 - 5.547: 99.8740% ( 2) 00:15:16.932 5.733 - 5.760: 99.8788% ( 1) 00:15:16.932 5.813 - 5.840: 99.8837% ( 1) 00:15:16.932 5.867 - 5.893: 99.8885% ( 1) 00:15:16.932 5.893 - 5.920: 99.8934% ( 1) 00:15:16.932 5.920 - 5.947: 99.8982% ( 1) 00:15:16.932 5.973 - 6.000: 99.9031% ( 1) 00:15:16.932 6.107 - 6.133: 99.9079% ( 1) 00:15:16.932 6.160 - 6.187: 99.9176% ( 2) 00:15:16.932 6.213 - 6.240: 99.9225% ( 1) 00:15:16.932 6.400 - 6.427: 99.9273% ( 1) 00:15:16.932 6.773 - 6.800: 99.9322% ( 1) 00:15:16.932 3986.773 - 4014.080: 100.0000% ( 14) 00:15:16.932 00:15:16.932 Complete histogram 00:15:16.932 ================== 
00:15:16.932 Range in us Cumulative Count 00:15:16.932 1.620 - 1.627: 0.0048% ( 1) 00:15:16.932 1.627 - 1.633: 0.0097% ( 1) 00:15:16.932 1.633 - 1.640: 0.2811% ( 56) 00:15:16.932 1.640 - 1.647: 0.9159% ( 131) 00:15:16.932 1.647 - 1.653: 0.9595% ( 9) 00:15:16.932 1.653 - 1.660: 1.0904% ( 27) 00:15:16.932 1.660 - 1.667: 1.2164% ( 26) 00:15:16.932 1.667 - 1.673: 1.2503% ( 7) 00:15:16.932 1.673 - 1.680: 1.2842% ( 7) 00:15:16.932 1.680 - 1.687: 1.2988% ( 3) 00:15:16.932 1.687 - 1.693: 1.9191% ( 128)
[2024-11-06 13:57:03.100761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:16.932 1.693 - 1.700: 43.5231% ( 8585) 00:15:16.932 1.700 - 1.707: 52.4885% ( 1850) 00:15:16.932 1.707 - 1.720: 72.1735% ( 4062) 00:15:16.932 1.720 - 1.733: 81.2745% ( 1878) 00:15:16.932 1.733 - 1.747: 83.3196% ( 422) 00:15:16.932 1.747 - 1.760: 85.9704% ( 547) 00:15:16.932 1.760 - 1.773: 91.5920% ( 1160) 00:15:16.932 1.773 - 1.787: 96.1086% ( 932) 00:15:16.932 1.787 - 1.800: 98.4638% ( 486) 00:15:16.932 1.800 - 1.813: 99.3264% ( 178) 00:15:16.932 1.813 - 1.827: 99.4378% ( 23) 00:15:16.932 1.827 - 1.840: 99.4669% ( 6) 00:15:16.932 1.840 - 1.853: 99.4718% ( 1) 00:15:16.932 1.867 - 1.880: 99.4766% ( 1) 00:15:16.932 1.933 - 1.947: 99.4815% ( 1) 00:15:16.932 3.227 - 3.240: 99.4863% ( 1) 00:15:16.932 3.253 - 3.267: 99.4912% ( 1) 00:15:16.932 3.280 - 3.293: 99.4960% ( 1) 00:15:16.932 3.307 - 3.320: 99.5008% ( 1) 00:15:16.932 3.347 - 3.360: 99.5057% ( 1) 00:15:16.932 3.360 - 3.373: 99.5105% ( 1) 00:15:16.932 3.467 - 3.493: 99.5154% ( 1) 00:15:16.932 3.493 - 3.520: 99.5251% ( 2) 00:15:16.932 3.547 - 3.573: 99.5299% ( 1) 00:15:16.932 3.840 - 3.867: 99.5348% ( 1) 00:15:16.932 3.893 - 3.920: 99.5493% ( 3) 00:15:16.932 3.920 - 3.947: 99.5590% ( 2) 00:15:16.932 3.973 - 4.000: 99.5638% ( 1) 00:15:16.932 4.080 - 4.107: 99.5687% ( 1) 00:15:16.932 4.107 - 4.133: 99.5735% ( 1) 00:15:16.932 4.240 - 4.267: 99.5784% ( 1) 00:15:16.932 4.267 - 
4.293: 99.5832% ( 1) 00:15:16.933 4.320 - 4.347: 99.5881% ( 1) 00:15:16.933 4.373 - 4.400: 99.5929% ( 1) 00:15:16.933 4.453 - 4.480: 99.5978% ( 1) 00:15:16.933 4.480 - 4.507: 99.6026% ( 1) 00:15:16.933 4.533 - 4.560: 99.6075% ( 1) 00:15:16.933 4.560 - 4.587: 99.6123% ( 1) 00:15:16.933 4.613 - 4.640: 99.6172% ( 1) 00:15:16.933 4.853 - 4.880: 99.6220% ( 1) 00:15:16.933 5.467 - 5.493: 99.6268% ( 1) 00:15:16.933 5.840 - 5.867: 99.6317% ( 1) 00:15:16.933 10.560 - 10.613: 99.6365% ( 1) 00:15:16.933 13.600 - 13.653: 99.6414% ( 1) 00:15:16.933 76.800 - 77.227: 99.6462% ( 1) 00:15:16.933 3959.467 - 3986.773: 99.6511% ( 1) 00:15:16.933 3986.773 - 4014.080: 100.0000% ( 72) 00:15:16.933 00:15:16.933 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:16.933 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:16.933 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:16.933 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:16.933 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.193 [ 00:15:17.193 { 00:15:17.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.193 "subtype": "Discovery", 00:15:17.193 "listen_addresses": [], 00:15:17.193 "allow_any_host": true, 00:15:17.193 "hosts": [] 00:15:17.193 }, 00:15:17.193 { 00:15:17.193 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.193 "subtype": "NVMe", 00:15:17.193 "listen_addresses": [ 00:15:17.193 { 00:15:17.193 "trtype": "VFIOUSER", 00:15:17.193 "adrfam": "IPv4", 00:15:17.193 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:15:17.193 "trsvcid": "0" 00:15:17.193 } 00:15:17.193 ], 00:15:17.193 "allow_any_host": true, 00:15:17.193 "hosts": [], 00:15:17.193 "serial_number": "SPDK1", 00:15:17.193 "model_number": "SPDK bdev Controller", 00:15:17.193 "max_namespaces": 32, 00:15:17.193 "min_cntlid": 1, 00:15:17.193 "max_cntlid": 65519, 00:15:17.193 "namespaces": [ 00:15:17.193 { 00:15:17.193 "nsid": 1, 00:15:17.193 "bdev_name": "Malloc1", 00:15:17.193 "name": "Malloc1", 00:15:17.193 "nguid": "B2C61E2E486E474B948300014474CD57", 00:15:17.193 "uuid": "b2c61e2e-486e-474b-9483-00014474cd57" 00:15:17.193 } 00:15:17.193 ] 00:15:17.193 }, 00:15:17.193 { 00:15:17.193 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.193 "subtype": "NVMe", 00:15:17.193 "listen_addresses": [ 00:15:17.193 { 00:15:17.193 "trtype": "VFIOUSER", 00:15:17.193 "adrfam": "IPv4", 00:15:17.193 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.193 "trsvcid": "0" 00:15:17.193 } 00:15:17.193 ], 00:15:17.193 "allow_any_host": true, 00:15:17.193 "hosts": [], 00:15:17.193 "serial_number": "SPDK2", 00:15:17.193 "model_number": "SPDK bdev Controller", 00:15:17.193 "max_namespaces": 32, 00:15:17.193 "min_cntlid": 1, 00:15:17.193 "max_cntlid": 65519, 00:15:17.193 "namespaces": [ 00:15:17.193 { 00:15:17.193 "nsid": 1, 00:15:17.193 "bdev_name": "Malloc2", 00:15:17.193 "name": "Malloc2", 00:15:17.193 "nguid": "E0277DAA13DD4CBBAC7E8BD1980F9004", 00:15:17.193 "uuid": "e0277daa-13dd-4cbb-ac7e-8bd1980f9004" 00:15:17.193 } 00:15:17.193 ] 00:15:17.193 } 00:15:17.193 ] 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:17.193 13:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2364215 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:17.193 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:17.454 [2024-11-06 13:57:03.483123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.454 Malloc3 00:15:17.454 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:17.454 [2024-11-06 13:57:03.669403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.454 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.454 Asynchronous Event Request test 00:15:17.454 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.454 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.454 Registering asynchronous event 
callbacks... 00:15:17.454 Starting namespace attribute notice tests for all controllers... 00:15:17.454 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:17.454 aer_cb - Changed Namespace 00:15:17.454 Cleaning up... 00:15:17.715 [ 00:15:17.715 { 00:15:17.715 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.715 "subtype": "Discovery", 00:15:17.715 "listen_addresses": [], 00:15:17.715 "allow_any_host": true, 00:15:17.715 "hosts": [] 00:15:17.715 }, 00:15:17.715 { 00:15:17.715 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.715 "subtype": "NVMe", 00:15:17.715 "listen_addresses": [ 00:15:17.715 { 00:15:17.715 "trtype": "VFIOUSER", 00:15:17.715 "adrfam": "IPv4", 00:15:17.715 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.715 "trsvcid": "0" 00:15:17.715 } 00:15:17.715 ], 00:15:17.715 "allow_any_host": true, 00:15:17.715 "hosts": [], 00:15:17.715 "serial_number": "SPDK1", 00:15:17.715 "model_number": "SPDK bdev Controller", 00:15:17.715 "max_namespaces": 32, 00:15:17.715 "min_cntlid": 1, 00:15:17.715 "max_cntlid": 65519, 00:15:17.715 "namespaces": [ 00:15:17.715 { 00:15:17.715 "nsid": 1, 00:15:17.715 "bdev_name": "Malloc1", 00:15:17.715 "name": "Malloc1", 00:15:17.715 "nguid": "B2C61E2E486E474B948300014474CD57", 00:15:17.715 "uuid": "b2c61e2e-486e-474b-9483-00014474cd57" 00:15:17.715 }, 00:15:17.715 { 00:15:17.715 "nsid": 2, 00:15:17.715 "bdev_name": "Malloc3", 00:15:17.715 "name": "Malloc3", 00:15:17.715 "nguid": "678207D009F34AD99549BE518DE3AA3D", 00:15:17.715 "uuid": "678207d0-09f3-4ad9-9549-be518de3aa3d" 00:15:17.715 } 00:15:17.715 ] 00:15:17.715 }, 00:15:17.715 { 00:15:17.715 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.715 "subtype": "NVMe", 00:15:17.715 "listen_addresses": [ 00:15:17.715 { 00:15:17.715 "trtype": "VFIOUSER", 00:15:17.715 "adrfam": "IPv4", 00:15:17.715 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.715 "trsvcid": "0" 00:15:17.715 } 00:15:17.715 ], 
00:15:17.715 "allow_any_host": true, 00:15:17.715 "hosts": [], 00:15:17.715 "serial_number": "SPDK2", 00:15:17.715 "model_number": "SPDK bdev Controller", 00:15:17.715 "max_namespaces": 32, 00:15:17.715 "min_cntlid": 1, 00:15:17.715 "max_cntlid": 65519, 00:15:17.715 "namespaces": [ 00:15:17.715 { 00:15:17.715 "nsid": 1, 00:15:17.715 "bdev_name": "Malloc2", 00:15:17.715 "name": "Malloc2", 00:15:17.715 "nguid": "E0277DAA13DD4CBBAC7E8BD1980F9004", 00:15:17.715 "uuid": "e0277daa-13dd-4cbb-ac7e-8bd1980f9004" 00:15:17.715 } 00:15:17.715 ] 00:15:17.715 } 00:15:17.715 ] 00:15:17.715 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2364215 00:15:17.715 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.715 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:17.715 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:17.715 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:17.715 [2024-11-06 13:57:03.908706] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:15:17.715 [2024-11-06 13:57:03.908758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364482 ] 00:15:17.715 [2024-11-06 13:57:03.947974] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:17.715 [2024-11-06 13:57:03.953170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:17.715 [2024-11-06 13:57:03.953188] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa12671f000 00:15:17.715 [2024-11-06 13:57:03.954175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.955182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.956193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.957194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.958200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.959210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.960217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:17.715 
[2024-11-06 13:57:03.961224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.715 [2024-11-06 13:57:03.962234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:17.715 [2024-11-06 13:57:03.962242] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa126714000 00:15:17.715 [2024-11-06 13:57:03.963153] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:17.715 [2024-11-06 13:57:03.972535] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:17.715 [2024-11-06 13:57:03.972555] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:17.715 [2024-11-06 13:57:03.977622] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:17.715 [2024-11-06 13:57:03.977654] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:17.715 [2024-11-06 13:57:03.977715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:17.715 [2024-11-06 13:57:03.977725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:17.715 [2024-11-06 13:57:03.977729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:17.716 [2024-11-06 13:57:03.978624] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:17.716 [2024-11-06 13:57:03.978631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:17.716 [2024-11-06 13:57:03.978637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:17.716 [2024-11-06 13:57:03.979629] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:17.716 [2024-11-06 13:57:03.979635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:17.716 [2024-11-06 13:57:03.979641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.980632] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:17.716 [2024-11-06 13:57:03.980639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.981635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:17.716 [2024-11-06 13:57:03.981642] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:17.716 [2024-11-06 13:57:03.981648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.981653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.981759] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:17.716 [2024-11-06 13:57:03.981762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.981766] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:17.716 [2024-11-06 13:57:03.982640] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:17.716 [2024-11-06 13:57:03.983648] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:17.716 [2024-11-06 13:57:03.984657] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:17.716 [2024-11-06 13:57:03.985658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.716 [2024-11-06 13:57:03.985687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:17.716 [2024-11-06 13:57:03.986666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:17.716 [2024-11-06 13:57:03.986673] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:17.716 [2024-11-06 13:57:03.986676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:17.716 [2024-11-06 13:57:03.986691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:17.716 [2024-11-06 13:57:03.986696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:17.716 [2024-11-06 13:57:03.986705] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:17.716 [2024-11-06 13:57:03.986709] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:17.716 [2024-11-06 13:57:03.986711] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.716 [2024-11-06 13:57:03.986721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:17.978 [2024-11-06 13:57:03.994752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:17.978 [2024-11-06 13:57:03.994762] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:17.978 [2024-11-06 13:57:03.994765] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:17.978 [2024-11-06 13:57:03.994769] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:17.978 [2024-11-06 13:57:03.994772] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:17.978 [2024-11-06 13:57:03.994777] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:17.978 [2024-11-06 13:57:03.994782] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:17.978 [2024-11-06 13:57:03.994786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:17.978 [2024-11-06 13:57:03.994793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:17.978 [2024-11-06 13:57:03.994801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:17.978 [2024-11-06 13:57:04.002750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:17.978 [2024-11-06 13:57:04.002759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.978 [2024-11-06 13:57:04.002765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.978 [2024-11-06 13:57:04.002771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.978 [2024-11-06 13:57:04.002778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.978 [2024-11-06 13:57:04.002781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:17.978 [2024-11-06 13:57:04.002786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:17.978 [2024-11-06 13:57:04.002793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:17.978 [2024-11-06 13:57:04.010751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:17.978 [2024-11-06 13:57:04.010760] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:17.978 [2024-11-06 13:57:04.010764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:17.978 [2024-11-06 13:57:04.010769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.010773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.010780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.018757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.018804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.018810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:17.979 
[2024-11-06 13:57:04.018816] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:17.979 [2024-11-06 13:57:04.018819] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:17.979 [2024-11-06 13:57:04.018822] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.018826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.026749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.026758] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:17.979 [2024-11-06 13:57:04.026766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.026772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.026777] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:17.979 [2024-11-06 13:57:04.026780] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:17.979 [2024-11-06 13:57:04.026782] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.026787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.034750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.034761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.034767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.034772] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:17.979 [2024-11-06 13:57:04.034775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:17.979 [2024-11-06 13:57:04.034777] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.034782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.042751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.042758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042785] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:17.979 [2024-11-06 13:57:04.042788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:17.979 [2024-11-06 13:57:04.042791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:17.979 [2024-11-06 13:57:04.042805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.050750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.050760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.058749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.058759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.066750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 
13:57:04.066759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.074750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.074761] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:17.979 [2024-11-06 13:57:04.074765] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:17.979 [2024-11-06 13:57:04.074767] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:17.979 [2024-11-06 13:57:04.074770] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:17.979 [2024-11-06 13:57:04.074772] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:17.979 [2024-11-06 13:57:04.074777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:17.979 [2024-11-06 13:57:04.074783] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:17.979 [2024-11-06 13:57:04.074786] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:17.979 [2024-11-06 13:57:04.074788] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.074793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.074798] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:17.979 [2024-11-06 13:57:04.074801] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:17.979 [2024-11-06 13:57:04.074803] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.074807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.074813] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:17.979 [2024-11-06 13:57:04.074816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:17.979 [2024-11-06 13:57:04.074818] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.979 [2024-11-06 13:57:04.074822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:17.979 [2024-11-06 13:57:04.082749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.082760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.082769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:17.979 [2024-11-06 13:57:04.082774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:17.979 ===================================================== 00:15:17.979 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.979 ===================================================== 00:15:17.979 Controller Capabilities/Features 00:15:17.979 
================================ 00:15:17.979 Vendor ID: 4e58 00:15:17.979 Subsystem Vendor ID: 4e58 00:15:17.979 Serial Number: SPDK2 00:15:17.979 Model Number: SPDK bdev Controller 00:15:17.979 Firmware Version: 25.01 00:15:17.979 Recommended Arb Burst: 6 00:15:17.979 IEEE OUI Identifier: 8d 6b 50 00:15:17.979 Multi-path I/O 00:15:17.979 May have multiple subsystem ports: Yes 00:15:17.979 May have multiple controllers: Yes 00:15:17.979 Associated with SR-IOV VF: No 00:15:17.979 Max Data Transfer Size: 131072 00:15:17.979 Max Number of Namespaces: 32 00:15:17.979 Max Number of I/O Queues: 127 00:15:17.979 NVMe Specification Version (VS): 1.3 00:15:17.979 NVMe Specification Version (Identify): 1.3 00:15:17.979 Maximum Queue Entries: 256 00:15:17.979 Contiguous Queues Required: Yes 00:15:17.979 Arbitration Mechanisms Supported 00:15:17.979 Weighted Round Robin: Not Supported 00:15:17.979 Vendor Specific: Not Supported 00:15:17.979 Reset Timeout: 15000 ms 00:15:17.979 Doorbell Stride: 4 bytes 00:15:17.979 NVM Subsystem Reset: Not Supported 00:15:17.979 Command Sets Supported 00:15:17.979 NVM Command Set: Supported 00:15:17.979 Boot Partition: Not Supported 00:15:17.979 Memory Page Size Minimum: 4096 bytes 00:15:17.979 Memory Page Size Maximum: 4096 bytes 00:15:17.979 Persistent Memory Region: Not Supported 00:15:17.979 Optional Asynchronous Events Supported 00:15:17.979 Namespace Attribute Notices: Supported 00:15:17.979 Firmware Activation Notices: Not Supported 00:15:17.980 ANA Change Notices: Not Supported 00:15:17.980 PLE Aggregate Log Change Notices: Not Supported 00:15:17.980 LBA Status Info Alert Notices: Not Supported 00:15:17.980 EGE Aggregate Log Change Notices: Not Supported 00:15:17.980 Normal NVM Subsystem Shutdown event: Not Supported 00:15:17.980 Zone Descriptor Change Notices: Not Supported 00:15:17.980 Discovery Log Change Notices: Not Supported 00:15:17.980 Controller Attributes 00:15:17.980 128-bit Host Identifier: Supported 00:15:17.980 
Non-Operational Permissive Mode: Not Supported 00:15:17.980 NVM Sets: Not Supported 00:15:17.980 Read Recovery Levels: Not Supported 00:15:17.980 Endurance Groups: Not Supported 00:15:17.980 Predictable Latency Mode: Not Supported 00:15:17.980 Traffic Based Keep ALive: Not Supported 00:15:17.980 Namespace Granularity: Not Supported 00:15:17.980 SQ Associations: Not Supported 00:15:17.980 UUID List: Not Supported 00:15:17.980 Multi-Domain Subsystem: Not Supported 00:15:17.980 Fixed Capacity Management: Not Supported 00:15:17.980 Variable Capacity Management: Not Supported 00:15:17.980 Delete Endurance Group: Not Supported 00:15:17.980 Delete NVM Set: Not Supported 00:15:17.980 Extended LBA Formats Supported: Not Supported 00:15:17.980 Flexible Data Placement Supported: Not Supported 00:15:17.980 00:15:17.980 Controller Memory Buffer Support 00:15:17.980 ================================ 00:15:17.980 Supported: No 00:15:17.980 00:15:17.980 Persistent Memory Region Support 00:15:17.980 ================================ 00:15:17.980 Supported: No 00:15:17.980 00:15:17.980 Admin Command Set Attributes 00:15:17.980 ============================ 00:15:17.980 Security Send/Receive: Not Supported 00:15:17.980 Format NVM: Not Supported 00:15:17.980 Firmware Activate/Download: Not Supported 00:15:17.980 Namespace Management: Not Supported 00:15:17.980 Device Self-Test: Not Supported 00:15:17.980 Directives: Not Supported 00:15:17.980 NVMe-MI: Not Supported 00:15:17.980 Virtualization Management: Not Supported 00:15:17.980 Doorbell Buffer Config: Not Supported 00:15:17.980 Get LBA Status Capability: Not Supported 00:15:17.980 Command & Feature Lockdown Capability: Not Supported 00:15:17.980 Abort Command Limit: 4 00:15:17.980 Async Event Request Limit: 4 00:15:17.980 Number of Firmware Slots: N/A 00:15:17.980 Firmware Slot 1 Read-Only: N/A 00:15:17.980 Firmware Activation Without Reset: N/A 00:15:17.980 Multiple Update Detection Support: N/A 00:15:17.980 Firmware Update 
Granularity: No Information Provided 00:15:17.980 Per-Namespace SMART Log: No 00:15:17.980 Asymmetric Namespace Access Log Page: Not Supported 00:15:17.980 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:17.980 Command Effects Log Page: Supported 00:15:17.980 Get Log Page Extended Data: Supported 00:15:17.980 Telemetry Log Pages: Not Supported 00:15:17.980 Persistent Event Log Pages: Not Supported 00:15:17.980 Supported Log Pages Log Page: May Support 00:15:17.980 Commands Supported & Effects Log Page: Not Supported 00:15:17.980 Feature Identifiers & Effects Log Page:May Support 00:15:17.980 NVMe-MI Commands & Effects Log Page: May Support 00:15:17.980 Data Area 4 for Telemetry Log: Not Supported 00:15:17.980 Error Log Page Entries Supported: 128 00:15:17.980 Keep Alive: Supported 00:15:17.980 Keep Alive Granularity: 10000 ms 00:15:17.980 00:15:17.980 NVM Command Set Attributes 00:15:17.980 ========================== 00:15:17.980 Submission Queue Entry Size 00:15:17.980 Max: 64 00:15:17.980 Min: 64 00:15:17.980 Completion Queue Entry Size 00:15:17.980 Max: 16 00:15:17.980 Min: 16 00:15:17.980 Number of Namespaces: 32 00:15:17.980 Compare Command: Supported 00:15:17.980 Write Uncorrectable Command: Not Supported 00:15:17.980 Dataset Management Command: Supported 00:15:17.980 Write Zeroes Command: Supported 00:15:17.980 Set Features Save Field: Not Supported 00:15:17.980 Reservations: Not Supported 00:15:17.980 Timestamp: Not Supported 00:15:17.980 Copy: Supported 00:15:17.980 Volatile Write Cache: Present 00:15:17.980 Atomic Write Unit (Normal): 1 00:15:17.980 Atomic Write Unit (PFail): 1 00:15:17.980 Atomic Compare & Write Unit: 1 00:15:17.980 Fused Compare & Write: Supported 00:15:17.980 Scatter-Gather List 00:15:17.980 SGL Command Set: Supported (Dword aligned) 00:15:17.980 SGL Keyed: Not Supported 00:15:17.980 SGL Bit Bucket Descriptor: Not Supported 00:15:17.980 SGL Metadata Pointer: Not Supported 00:15:17.980 Oversized SGL: Not Supported 00:15:17.980 SGL 
Metadata Address: Not Supported 00:15:17.980 SGL Offset: Not Supported 00:15:17.980 Transport SGL Data Block: Not Supported 00:15:17.980 Replay Protected Memory Block: Not Supported 00:15:17.980 00:15:17.980 Firmware Slot Information 00:15:17.980 ========================= 00:15:17.980 Active slot: 1 00:15:17.980 Slot 1 Firmware Revision: 25.01 00:15:17.980 00:15:17.980 00:15:17.980 Commands Supported and Effects 00:15:17.980 ============================== 00:15:17.980 Admin Commands 00:15:17.980 -------------- 00:15:17.980 Get Log Page (02h): Supported 00:15:17.980 Identify (06h): Supported 00:15:17.980 Abort (08h): Supported 00:15:17.980 Set Features (09h): Supported 00:15:17.980 Get Features (0Ah): Supported 00:15:17.980 Asynchronous Event Request (0Ch): Supported 00:15:17.980 Keep Alive (18h): Supported 00:15:17.980 I/O Commands 00:15:17.980 ------------ 00:15:17.980 Flush (00h): Supported LBA-Change 00:15:17.980 Write (01h): Supported LBA-Change 00:15:17.980 Read (02h): Supported 00:15:17.980 Compare (05h): Supported 00:15:17.980 Write Zeroes (08h): Supported LBA-Change 00:15:17.980 Dataset Management (09h): Supported LBA-Change 00:15:17.980 Copy (19h): Supported LBA-Change 00:15:17.980 00:15:17.980 Error Log 00:15:17.980 ========= 00:15:17.980 00:15:17.980 Arbitration 00:15:17.980 =========== 00:15:17.980 Arbitration Burst: 1 00:15:17.980 00:15:17.980 Power Management 00:15:17.980 ================ 00:15:17.980 Number of Power States: 1 00:15:17.980 Current Power State: Power State #0 00:15:17.980 Power State #0: 00:15:17.980 Max Power: 0.00 W 00:15:17.980 Non-Operational State: Operational 00:15:17.980 Entry Latency: Not Reported 00:15:17.980 Exit Latency: Not Reported 00:15:17.980 Relative Read Throughput: 0 00:15:17.980 Relative Read Latency: 0 00:15:17.980 Relative Write Throughput: 0 00:15:17.980 Relative Write Latency: 0 00:15:17.980 Idle Power: Not Reported 00:15:17.980 Active Power: Not Reported 00:15:17.980 Non-Operational Permissive Mode: Not 
Supported 00:15:17.980 00:15:17.980 Health Information 00:15:17.980 ================== 00:15:17.980 Critical Warnings: 00:15:17.980 Available Spare Space: OK 00:15:17.980 Temperature: OK 00:15:17.980 Device Reliability: OK 00:15:17.980 Read Only: No 00:15:17.980 Volatile Memory Backup: OK 00:15:17.980 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:17.980 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:17.980 Available Spare: 0% 00:15:17.980 [2024-11-06 13:57:04.082847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:17.980 [2024-11-06 13:57:04.090750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:17.980 [2024-11-06 13:57:04.090772] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:17.980 [2024-11-06 13:57:04.090779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.980 [2024-11-06 13:57:04.090784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.980 [2024-11-06 13:57:04.090789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.980 [2024-11-06 13:57:04.090794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.980 [2024-11-06 13:57:04.090823] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:17.980 [2024-11-06 13:57:04.090830] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:17.980 
[2024-11-06 13:57:04.091830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.980 [2024-11-06 13:57:04.091867] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:17.980 [2024-11-06 13:57:04.091872] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:17.980 [2024-11-06 13:57:04.092841] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:17.981 [2024-11-06 13:57:04.092849] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:17.981 [2024-11-06 13:57:04.092891] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:17.981 [2024-11-06 13:57:04.093861] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:17.981 Available Spare Threshold: 0% 00:15:17.981 Life Percentage Used: 0% 00:15:17.981 Data Units Read: 0 00:15:17.981 Data Units Written: 0 00:15:17.981 Host Read Commands: 0 00:15:17.981 Host Write Commands: 0 00:15:17.981 Controller Busy Time: 0 minutes 00:15:17.981 Power Cycles: 0 00:15:17.981 Power On Hours: 0 hours 00:15:17.981 Unsafe Shutdowns: 0 00:15:17.981 Unrecoverable Media Errors: 0 00:15:17.981 Lifetime Error Log Entries: 0 00:15:17.981 Warning Temperature Time: 0 minutes 00:15:17.981 Critical Temperature Time: 0 minutes 00:15:17.981 00:15:17.981 Number of Queues 00:15:17.981 ================ 00:15:17.981 Number of I/O Submission Queues: 127 00:15:17.981 Number of I/O Completion Queues: 127 00:15:17.981 00:15:17.981 Active Namespaces 00:15:17.981 ================= 00:15:17.981 Namespace ID:1 00:15:17.981 Error Recovery Timeout: Unlimited 
00:15:17.981 Command Set Identifier: NVM (00h) 00:15:17.981 Deallocate: Supported 00:15:17.981 Deallocated/Unwritten Error: Not Supported 00:15:17.981 Deallocated Read Value: Unknown 00:15:17.981 Deallocate in Write Zeroes: Not Supported 00:15:17.981 Deallocated Guard Field: 0xFFFF 00:15:17.981 Flush: Supported 00:15:17.981 Reservation: Supported 00:15:17.981 Namespace Sharing Capabilities: Multiple Controllers 00:15:17.981 Size (in LBAs): 131072 (0GiB) 00:15:17.981 Capacity (in LBAs): 131072 (0GiB) 00:15:17.981 Utilization (in LBAs): 131072 (0GiB) 00:15:17.981 NGUID: E0277DAA13DD4CBBAC7E8BD1980F9004 00:15:17.981 UUID: e0277daa-13dd-4cbb-ac7e-8bd1980f9004 00:15:17.981 Thin Provisioning: Not Supported 00:15:17.981 Per-NS Atomic Units: Yes 00:15:17.981 Atomic Boundary Size (Normal): 0 00:15:17.981 Atomic Boundary Size (PFail): 0 00:15:17.981 Atomic Boundary Offset: 0 00:15:17.981 Maximum Single Source Range Length: 65535 00:15:17.981 Maximum Copy Length: 65535 00:15:17.981 Maximum Source Range Count: 1 00:15:17.981 NGUID/EUI64 Never Reused: No 00:15:17.981 Namespace Write Protected: No 00:15:17.981 Number of LBA Formats: 1 00:15:17.981 Current LBA Format: LBA Format #00 00:15:17.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:17.981 00:15:17.981 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:18.241 [2024-11-06 13:57:04.283823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.525 Initializing NVMe Controllers 00:15:23.525 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:23.525 Initialization complete. Launching workers. 00:15:23.525 ======================================================== 00:15:23.525 Latency(us) 00:15:23.525 Device Information : IOPS MiB/s Average min max 00:15:23.525 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40049.00 156.44 3198.47 837.60 8779.72 00:15:23.525 ======================================================== 00:15:23.525 Total : 40049.00 156.44 3198.47 837.60 8779.72 00:15:23.525 00:15:23.525 [2024-11-06 13:57:09.391947] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.525 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:23.525 [2024-11-06 13:57:09.587529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.897 Initializing NVMe Controllers 00:15:28.897 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.897 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:28.897 Initialization complete. Launching workers. 
00:15:28.897 ======================================================== 00:15:28.897 Latency(us) 00:15:28.897 Device Information : IOPS MiB/s Average min max 00:15:28.897 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.86 156.16 3201.80 845.65 7769.66 00:15:28.897 ======================================================== 00:15:28.897 Total : 39975.86 156.16 3201.80 845.65 7769.66 00:15:28.897 00:15:28.897 [2024-11-06 13:57:14.605074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.897 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:28.897 [2024-11-06 13:57:14.808133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.175 [2024-11-06 13:57:19.958823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.175 Initializing NVMe Controllers 00:15:34.175 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.175 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:34.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:34.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:34.175 Initialization complete. Launching workers. 
00:15:34.175 Starting thread on core 2 00:15:34.175 Starting thread on core 3 00:15:34.175 Starting thread on core 1 00:15:34.175 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:34.175 [2024-11-06 13:57:20.213226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.471 [2024-11-06 13:57:23.275897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.471 Initializing NVMe Controllers 00:15:37.471 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.471 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:37.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:37.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:37.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:37.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:37.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:37.471 Initialization complete. Launching workers. 
00:15:37.471 Starting thread on core 1 with urgent priority queue 00:15:37.471 Starting thread on core 2 with urgent priority queue 00:15:37.471 Starting thread on core 3 with urgent priority queue 00:15:37.471 Starting thread on core 0 with urgent priority queue 00:15:37.471 SPDK bdev Controller (SPDK2 ) core 0: 5200.00 IO/s 19.23 secs/100000 ios 00:15:37.471 SPDK bdev Controller (SPDK2 ) core 1: 3153.67 IO/s 31.71 secs/100000 ios 00:15:37.471 SPDK bdev Controller (SPDK2 ) core 2: 7006.00 IO/s 14.27 secs/100000 ios 00:15:37.471 SPDK bdev Controller (SPDK2 ) core 3: 5351.67 IO/s 18.69 secs/100000 ios 00:15:37.471 ======================================================== 00:15:37.471 00:15:37.471 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:37.471 [2024-11-06 13:57:23.519144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.471 Initializing NVMe Controllers 00:15:37.471 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.471 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.471 Namespace ID: 1 size: 0GB 00:15:37.471 Initialization complete. 00:15:37.471 INFO: using host memory buffer for IO 00:15:37.471 Hello world! 
00:15:37.471 [2024-11-06 13:57:23.531231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.471 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:37.731 [2024-11-06 13:57:23.761137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.671 Initializing NVMe Controllers 00:15:38.671 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.671 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.671 Initialization complete. Launching workers. 00:15:38.671 submit (in ns) avg, min, max = 5878.3, 2815.8, 4004417.5 00:15:38.671 complete (in ns) avg, min, max = 16277.2, 1636.7, 4004665.8 00:15:38.671 00:15:38.671 Submit histogram 00:15:38.671 ================ 00:15:38.671 Range in us Cumulative Count 00:15:38.671 2.813 - 2.827: 0.5493% ( 113) 00:15:38.671 2.827 - 2.840: 1.9007% ( 278) 00:15:38.671 2.840 - 2.853: 4.9293% ( 623) 00:15:38.671 2.853 - 2.867: 9.7856% ( 999) 00:15:38.671 2.867 - 2.880: 15.0357% ( 1080) 00:15:38.671 2.880 - 2.893: 20.5532% ( 1135) 00:15:38.671 2.893 - 2.907: 25.0304% ( 921) 00:15:38.671 2.907 - 2.920: 30.5624% ( 1138) 00:15:38.671 2.920 - 2.933: 36.4348% ( 1208) 00:15:38.671 2.933 - 2.947: 41.7578% ( 1095) 00:15:38.671 2.947 - 2.960: 46.7163% ( 1020) 00:15:38.671 2.960 - 2.973: 52.8851% ( 1269) 00:15:38.671 2.973 - 2.987: 61.5186% ( 1776) 00:15:38.671 2.987 - 3.000: 70.6529% ( 1879) 00:15:38.671 3.000 - 3.013: 77.9447% ( 1500) 00:15:38.671 3.013 - 3.027: 84.5268% ( 1354) 00:15:38.671 3.027 - 3.040: 90.2435% ( 1176) 00:15:38.671 3.040 - 3.053: 94.6332% ( 903) 00:15:38.671 3.053 - 3.067: 97.3749% ( 564) 00:15:38.671 3.067 - 3.080: 98.5368% ( 239) 00:15:38.671 3.080 - 3.093: 
99.1055% ( 117) 00:15:38.671 3.093 - 3.107: 99.4069% ( 62) 00:15:38.671 3.107 - 3.120: 99.4944% ( 18) 00:15:38.671 3.120 - 3.133: 99.5333% ( 8) 00:15:38.671 3.133 - 3.147: 99.5868% ( 11) 00:15:38.671 3.147 - 3.160: 99.5965% ( 2) 00:15:38.671 3.160 - 3.173: 99.6014% ( 1) 00:15:38.671 3.213 - 3.227: 99.6062% ( 1) 00:15:38.671 3.307 - 3.320: 99.6111% ( 1) 00:15:38.671 3.413 - 3.440: 99.6160% ( 1) 00:15:38.671 3.440 - 3.467: 99.6208% ( 1) 00:15:38.671 3.573 - 3.600: 99.6257% ( 1) 00:15:38.671 3.787 - 3.813: 99.6354% ( 2) 00:15:38.671 3.867 - 3.893: 99.6403% ( 1) 00:15:38.671 3.893 - 3.920: 99.6451% ( 1) 00:15:38.671 3.947 - 3.973: 99.6500% ( 1) 00:15:38.671 4.053 - 4.080: 99.6549% ( 1) 00:15:38.671 4.107 - 4.133: 99.6597% ( 1) 00:15:38.671 4.213 - 4.240: 99.6646% ( 1) 00:15:38.671 4.373 - 4.400: 99.6694% ( 1) 00:15:38.671 4.400 - 4.427: 99.6743% ( 1) 00:15:38.671 4.427 - 4.453: 99.6792% ( 1) 00:15:38.671 4.453 - 4.480: 99.6840% ( 1) 00:15:38.671 4.613 - 4.640: 99.6986% ( 3) 00:15:38.671 4.693 - 4.720: 99.7035% ( 1) 00:15:38.671 4.800 - 4.827: 99.7083% ( 1) 00:15:38.671 4.853 - 4.880: 99.7132% ( 1) 00:15:38.671 4.960 - 4.987: 99.7180% ( 1) 00:15:38.671 5.013 - 5.040: 99.7229% ( 1) 00:15:38.671 5.040 - 5.067: 99.7278% ( 1) 00:15:38.671 5.067 - 5.093: 99.7326% ( 1) 00:15:38.671 5.093 - 5.120: 99.7375% ( 1) 00:15:38.671 5.120 - 5.147: 99.7424% ( 1) 00:15:38.671 5.147 - 5.173: 99.7472% ( 1) 00:15:38.671 5.333 - 5.360: 99.7521% ( 1) 00:15:38.671 5.413 - 5.440: 99.7667% ( 3) 00:15:38.671 5.467 - 5.493: 99.7715% ( 1) 00:15:38.671 5.547 - 5.573: 99.7764% ( 1) 00:15:38.671 5.600 - 5.627: 99.7812% ( 1) 00:15:38.671 5.627 - 5.653: 99.7861% ( 1) 00:15:38.671 5.680 - 5.707: 99.8007% ( 3) 00:15:38.671 5.707 - 5.733: 99.8056% ( 1) 00:15:38.671 5.813 - 5.840: 99.8104% ( 1) 00:15:38.671 5.840 - 5.867: 99.8201% ( 2) 00:15:38.671 5.893 - 5.920: 99.8250% ( 1) 00:15:38.671 5.947 - 5.973: 99.8347% ( 2) 00:15:38.671 6.000 - 6.027: 99.8396% ( 1) 00:15:38.671 6.107 - 6.133: 99.8444% ( 1) 
00:15:38.671 6.187 - 6.213: 99.8542% ( 2) 00:15:38.671 6.213 - 6.240: 99.8590% ( 1) 00:15:38.671 6.240 - 6.267: 99.8639% ( 1) 00:15:38.671 6.267 - 6.293: 99.8687% ( 1) 00:15:38.671 6.293 - 6.320: 99.8736% ( 1) 00:15:38.671 6.373 - 6.400: 99.8833% ( 2) 00:15:38.671 6.427 - 6.453: 99.8882% ( 1) 00:15:38.671 6.453 - 6.480: 99.8931% ( 1) 00:15:38.671 6.480 - 6.507: 99.8979% ( 1) 00:15:38.671 6.587 - 6.613: 99.9028% ( 1) 00:15:38.671 6.747 - 6.773: 99.9076% ( 1) 00:15:38.671 [2024-11-06 13:57:24.854277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.671 6.987 - 7.040: 99.9125% ( 1) 00:15:38.671 7.040 - 7.093: 99.9174% ( 1) 00:15:38.671 7.147 - 7.200: 99.9222% ( 1) 00:15:38.671 7.573 - 7.627: 99.9271% ( 1) 00:15:38.671 3986.773 - 4014.080: 100.0000% ( 15) 00:15:38.672 00:15:38.672 Complete histogram 00:15:38.672 ================== 00:15:38.672 Range in us Cumulative Count 00:15:38.672 1.633 - 1.640: 0.0049% ( 1) 00:15:38.672 1.647 - 1.653: 0.8556% ( 175) 00:15:38.672 1.653 - 1.660: 1.0452% ( 39) 00:15:38.672 1.660 - 1.667: 1.1084% ( 13) 00:15:38.672 1.667 - 1.673: 1.3125% ( 42) 00:15:38.672 1.673 - 1.680: 1.4389% ( 26) 00:15:38.672 1.680 - 1.687: 1.4827% ( 9) 00:15:38.672 1.687 - 1.693: 1.5167% ( 7) 00:15:38.672 1.693 - 1.700: 1.6723% ( 32) 00:15:38.672 1.700 - 1.707: 39.2251% ( 7725) 00:15:38.672 1.707 - 1.720: 56.5019% ( 3554) 00:15:38.672 1.720 - 1.733: 74.3279% ( 3667) 00:15:38.672 1.733 - 1.747: 82.2080% ( 1621) 00:15:38.672 1.747 - 1.760: 83.6615% ( 299) 00:15:38.672 1.760 - 1.773: 87.5504% ( 800) 00:15:38.672 1.773 - 1.787: 93.0971% ( 1141) 00:15:38.672 1.787 - 1.800: 96.8597% ( 774) 00:15:38.672 1.800 - 1.813: 98.6875% ( 376) 00:15:38.672 1.813 - 1.827: 99.3340% ( 133) 00:15:38.672 1.827 - 1.840: 99.4701% ( 28) 00:15:38.672 1.840 - 1.853: 99.4896% ( 4) 00:15:38.672 3.520 - 3.547: 99.4944% ( 1) 00:15:38.672 3.573 - 3.600: 99.4993% ( 1) 00:15:38.672 3.627 - 3.653: 99.5042% ( 1) 00:15:38.672 3.653 - 
3.680: 99.5139% ( 2) 00:15:38.672 4.027 - 4.053: 99.5187% ( 1) 00:15:38.672 4.107 - 4.133: 99.5333% ( 3) 00:15:38.672 4.160 - 4.187: 99.5382% ( 1) 00:15:38.672 4.213 - 4.240: 99.5430% ( 1) 00:15:38.672 4.320 - 4.347: 99.5479% ( 1) 00:15:38.672 4.453 - 4.480: 99.5528% ( 1) 00:15:38.672 4.533 - 4.560: 99.5576% ( 1) 00:15:38.672 4.640 - 4.667: 99.5625% ( 1) 00:15:38.672 4.667 - 4.693: 99.5674% ( 1) 00:15:38.672 4.800 - 4.827: 99.5722% ( 1) 00:15:38.672 4.880 - 4.907: 99.5771% ( 1) 00:15:38.672 4.907 - 4.933: 99.5868% ( 2) 00:15:38.672 5.013 - 5.040: 99.5917% ( 1) 00:15:38.672 5.040 - 5.067: 99.5965% ( 1) 00:15:38.672 5.120 - 5.147: 99.6014% ( 1) 00:15:38.672 5.173 - 5.200: 99.6062% ( 1) 00:15:38.672 5.200 - 5.227: 99.6111% ( 1) 00:15:38.672 5.227 - 5.253: 99.6160% ( 1) 00:15:38.672 5.360 - 5.387: 99.6208% ( 1) 00:15:38.672 5.413 - 5.440: 99.6257% ( 1) 00:15:38.672 5.813 - 5.840: 99.6305% ( 1) 00:15:38.672 6.933 - 6.987: 99.6354% ( 1) 00:15:38.672 3659.093 - 3686.400: 99.6403% ( 1) 00:15:38.672 3986.773 - 4014.080: 100.0000% ( 74) 00:15:38.672 00:15:38.672 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:38.672 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:38.672 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:38.672 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:38.672 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:38.933 [ 00:15:38.933 { 00:15:38.933 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:38.933 "subtype": "Discovery", 00:15:38.933 "listen_addresses": 
[], 00:15:38.933 "allow_any_host": true, 00:15:38.933 "hosts": [] 00:15:38.933 }, 00:15:38.933 { 00:15:38.933 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:38.933 "subtype": "NVMe", 00:15:38.933 "listen_addresses": [ 00:15:38.933 { 00:15:38.933 "trtype": "VFIOUSER", 00:15:38.933 "adrfam": "IPv4", 00:15:38.933 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:38.933 "trsvcid": "0" 00:15:38.933 } 00:15:38.933 ], 00:15:38.933 "allow_any_host": true, 00:15:38.933 "hosts": [], 00:15:38.933 "serial_number": "SPDK1", 00:15:38.933 "model_number": "SPDK bdev Controller", 00:15:38.933 "max_namespaces": 32, 00:15:38.933 "min_cntlid": 1, 00:15:38.933 "max_cntlid": 65519, 00:15:38.933 "namespaces": [ 00:15:38.933 { 00:15:38.933 "nsid": 1, 00:15:38.933 "bdev_name": "Malloc1", 00:15:38.933 "name": "Malloc1", 00:15:38.933 "nguid": "B2C61E2E486E474B948300014474CD57", 00:15:38.933 "uuid": "b2c61e2e-486e-474b-9483-00014474cd57" 00:15:38.933 }, 00:15:38.933 { 00:15:38.933 "nsid": 2, 00:15:38.933 "bdev_name": "Malloc3", 00:15:38.933 "name": "Malloc3", 00:15:38.933 "nguid": "678207D009F34AD99549BE518DE3AA3D", 00:15:38.933 "uuid": "678207d0-09f3-4ad9-9549-be518de3aa3d" 00:15:38.933 } 00:15:38.933 ] 00:15:38.933 }, 00:15:38.933 { 00:15:38.933 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:38.933 "subtype": "NVMe", 00:15:38.933 "listen_addresses": [ 00:15:38.933 { 00:15:38.933 "trtype": "VFIOUSER", 00:15:38.933 "adrfam": "IPv4", 00:15:38.933 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:38.933 "trsvcid": "0" 00:15:38.933 } 00:15:38.933 ], 00:15:38.933 "allow_any_host": true, 00:15:38.933 "hosts": [], 00:15:38.933 "serial_number": "SPDK2", 00:15:38.933 "model_number": "SPDK bdev Controller", 00:15:38.933 "max_namespaces": 32, 00:15:38.933 "min_cntlid": 1, 00:15:38.933 "max_cntlid": 65519, 00:15:38.933 "namespaces": [ 00:15:38.933 { 00:15:38.933 "nsid": 1, 00:15:38.933 "bdev_name": "Malloc2", 00:15:38.933 "name": "Malloc2", 00:15:38.933 "nguid": 
"E0277DAA13DD4CBBAC7E8BD1980F9004", 00:15:38.933 "uuid": "e0277daa-13dd-4cbb-ac7e-8bd1980f9004" 00:15:38.933 } 00:15:38.933 ] 00:15:38.933 } 00:15:38.933 ] 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2369022 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:38.933 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:39.193 [2024-11-06 13:57:25.217690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.193 Malloc4 00:15:39.193 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:39.193 [2024-11-06 13:57:25.424086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.193 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.193 Asynchronous Event Request test 00:15:39.193 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.193 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.193 Registering asynchronous event callbacks... 00:15:39.193 Starting namespace attribute notice tests for all controllers... 00:15:39.193 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:39.194 aer_cb - Changed Namespace 00:15:39.194 Cleaning up... 
00:15:39.454 [ 00:15:39.454 { 00:15:39.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.454 "subtype": "Discovery", 00:15:39.454 "listen_addresses": [], 00:15:39.454 "allow_any_host": true, 00:15:39.454 "hosts": [] 00:15:39.454 }, 00:15:39.454 { 00:15:39.454 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.454 "subtype": "NVMe", 00:15:39.454 "listen_addresses": [ 00:15:39.454 { 00:15:39.454 "trtype": "VFIOUSER", 00:15:39.454 "adrfam": "IPv4", 00:15:39.454 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.454 "trsvcid": "0" 00:15:39.454 } 00:15:39.454 ], 00:15:39.454 "allow_any_host": true, 00:15:39.454 "hosts": [], 00:15:39.454 "serial_number": "SPDK1", 00:15:39.454 "model_number": "SPDK bdev Controller", 00:15:39.454 "max_namespaces": 32, 00:15:39.454 "min_cntlid": 1, 00:15:39.454 "max_cntlid": 65519, 00:15:39.454 "namespaces": [ 00:15:39.454 { 00:15:39.454 "nsid": 1, 00:15:39.454 "bdev_name": "Malloc1", 00:15:39.454 "name": "Malloc1", 00:15:39.454 "nguid": "B2C61E2E486E474B948300014474CD57", 00:15:39.454 "uuid": "b2c61e2e-486e-474b-9483-00014474cd57" 00:15:39.454 }, 00:15:39.454 { 00:15:39.454 "nsid": 2, 00:15:39.454 "bdev_name": "Malloc3", 00:15:39.454 "name": "Malloc3", 00:15:39.454 "nguid": "678207D009F34AD99549BE518DE3AA3D", 00:15:39.454 "uuid": "678207d0-09f3-4ad9-9549-be518de3aa3d" 00:15:39.454 } 00:15:39.454 ] 00:15:39.454 }, 00:15:39.454 { 00:15:39.454 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.454 "subtype": "NVMe", 00:15:39.454 "listen_addresses": [ 00:15:39.454 { 00:15:39.454 "trtype": "VFIOUSER", 00:15:39.454 "adrfam": "IPv4", 00:15:39.454 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.454 "trsvcid": "0" 00:15:39.454 } 00:15:39.454 ], 00:15:39.454 "allow_any_host": true, 00:15:39.454 "hosts": [], 00:15:39.454 "serial_number": "SPDK2", 00:15:39.454 "model_number": "SPDK bdev Controller", 00:15:39.454 "max_namespaces": 32, 00:15:39.455 "min_cntlid": 1, 00:15:39.455 "max_cntlid": 65519, 00:15:39.455 "namespaces": [ 
00:15:39.455 { 00:15:39.455 "nsid": 1, 00:15:39.455 "bdev_name": "Malloc2", 00:15:39.455 "name": "Malloc2", 00:15:39.455 "nguid": "E0277DAA13DD4CBBAC7E8BD1980F9004", 00:15:39.455 "uuid": "e0277daa-13dd-4cbb-ac7e-8bd1980f9004" 00:15:39.455 }, 00:15:39.455 { 00:15:39.455 "nsid": 2, 00:15:39.455 "bdev_name": "Malloc4", 00:15:39.455 "name": "Malloc4", 00:15:39.455 "nguid": "E36DBF1BA76349B89E3AF6A1A236AEA1", 00:15:39.455 "uuid": "e36dbf1b-a763-49b8-9e3a-f6a1a236aea1" 00:15:39.455 } 00:15:39.455 ] 00:15:39.455 } 00:15:39.455 ] 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2369022 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2359376 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2359376 ']' 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2359376 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2359376 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2359376' 00:15:39.455 killing process with pid 2359376 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@971 -- # kill 2359376 00:15:39.455 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2359376 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2369060 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2369060' 00:15:39.716 Process pid: 2369060 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2369060 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2369060 ']' 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:39.716 
13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.716 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:39.717 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.717 [2024-11-06 13:57:25.901910] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:39.717 [2024-11-06 13:57:25.902843] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:15:39.717 [2024-11-06 13:57:25.902887] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.717 [2024-11-06 13:57:25.990576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.977 [2024-11-06 13:57:26.025139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.977 [2024-11-06 13:57:26.025169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.977 [2024-11-06 13:57:26.025175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.977 [2024-11-06 13:57:26.025180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.977 [2024-11-06 13:57:26.025184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.977 [2024-11-06 13:57:26.026539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.977 [2024-11-06 13:57:26.026725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.977 [2024-11-06 13:57:26.026929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.977 [2024-11-06 13:57:26.027023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.977 [2024-11-06 13:57:26.081039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:39.977 [2024-11-06 13:57:26.082120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:39.977 [2024-11-06 13:57:26.083223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:39.977 [2024-11-06 13:57:26.083523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:39.977 [2024-11-06 13:57:26.083558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:40.549 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:40.549 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:40.549 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:41.489 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:41.750 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:41.750 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:41.750 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.750 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:41.750 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:42.011 Malloc1 00:15:42.011 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:42.271 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:42.271 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:42.533 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.533 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:42.533 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:42.793 Malloc2 00:15:42.793 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:43.055 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:43.055 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2369060 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2369060 ']' 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2369060 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:43.316 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2369060 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2369060' 00:15:43.316 killing process with pid 2369060 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2369060 00:15:43.316 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2369060 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.583 00:15:43.583 real 0m51.004s 00:15:43.583 user 3m15.351s 00:15:43.583 sys 0m2.708s 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:43.583 ************************************ 00:15:43.583 END TEST nvmf_vfio_user 00:15:43.583 ************************************ 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.583 ************************************ 00:15:43.583 START TEST nvmf_vfio_user_nvme_compliance 00:15:43.583 ************************************ 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:43.583 * Looking for test storage... 00:15:43.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:43.583 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.846 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:43.846 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.847 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.847 --rc genhtml_branch_coverage=1 00:15:43.847 --rc genhtml_function_coverage=1 00:15:43.847 --rc genhtml_legend=1 00:15:43.847 --rc geninfo_all_blocks=1 00:15:43.847 --rc geninfo_unexecuted_blocks=1 00:15:43.847 00:15:43.847 ' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.847 --rc genhtml_branch_coverage=1 00:15:43.847 --rc genhtml_function_coverage=1 00:15:43.847 --rc genhtml_legend=1 00:15:43.847 --rc geninfo_all_blocks=1 00:15:43.847 --rc geninfo_unexecuted_blocks=1 00:15:43.847 00:15:43.847 ' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.847 --rc genhtml_branch_coverage=1 00:15:43.847 --rc genhtml_function_coverage=1 00:15:43.847 --rc 
genhtml_legend=1 00:15:43.847 --rc geninfo_all_blocks=1 00:15:43.847 --rc geninfo_unexecuted_blocks=1 00:15:43.847 00:15:43.847 ' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.847 --rc genhtml_branch_coverage=1 00:15:43.847 --rc genhtml_function_coverage=1 00:15:43.847 --rc genhtml_legend=1 00:15:43.847 --rc geninfo_all_blocks=1 00:15:43.847 --rc geninfo_unexecuted_blocks=1 00:15:43.847 00:15:43.847 ' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.847 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.847 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2370081 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2370081' 00:15:43.847 Process pid: 2370081 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2370081 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2370081 ']' 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.847 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.848 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.848 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.848 [2024-11-06 13:57:30.040461] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:15:43.848 [2024-11-06 13:57:30.040524] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.848 [2024-11-06 13:57:30.101049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.108 [2024-11-06 13:57:30.134232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.108 [2024-11-06 13:57:30.134264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.108 [2024-11-06 13:57:30.134270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.108 [2024-11-06 13:57:30.134275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.108 [2024-11-06 13:57:30.134279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
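The compliance suite below points the nvme_compliance binary at the vfio-user endpoint with a transport-ID string passed via `-r`. A minimal sketch of that string's construction, using the NQN and socket path this log shows; only the format (`trtype:… traddr:… subnqn:…`) is being illustrated:

```shell
# Builds the transport-ID string the compliance run below passes to
# nvme_compliance -r; values match this test's setup.
nqn="nqn.2021-09.io.spdk:cnode0"
traddr="/var/run/vfio-user"
trid="trtype:VFIOUSER traddr:$traddr subnqn:$nqn"
echo "$trid"
```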
00:15:44.108 [2024-11-06 13:57:30.135435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.108 [2024-11-06 13:57:30.135558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.108 [2024-11-06 13:57:30.135559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.108 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.108 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:44.108 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.051 13:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 malloc0 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:45.051 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:45.312 00:15:45.312 00:15:45.312 CUnit - A unit testing framework for C - Version 2.1-3 00:15:45.312 http://cunit.sourceforge.net/ 00:15:45.312 00:15:45.312 00:15:45.312 Suite: nvme_compliance 00:15:45.312 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 13:57:31.466218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.312 [2024-11-06 13:57:31.467511] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:45.312 [2024-11-06 13:57:31.467521] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:45.312 [2024-11-06 13:57:31.467526] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:45.312 [2024-11-06 13:57:31.469238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.312 passed 00:15:45.312 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 13:57:31.544756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.312 [2024-11-06 13:57:31.547762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.312 passed 00:15:45.573 Test: admin_identify_ns ...[2024-11-06 13:57:31.627131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.573 [2024-11-06 13:57:31.687758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:45.573 [2024-11-06 13:57:31.695754] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:45.573 [2024-11-06 13:57:31.716839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller
00:15:45.573 passed
00:15:45.573 Test: admin_get_features_mandatory_features ...[2024-11-06 13:57:31.788097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:45.573 [2024-11-06 13:57:31.791111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:45.573 passed
00:15:45.833 Test: admin_get_features_optional_features ...[2024-11-06 13:57:31.868598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:45.833 [2024-11-06 13:57:31.871621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:45.833 passed
00:15:45.833 Test: admin_set_features_number_of_queues ...[2024-11-06 13:57:31.945124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:45.833 [2024-11-06 13:57:32.048845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:45.833 passed
00:15:46.093 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 13:57:32.124898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.093 [2024-11-06 13:57:32.127909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.093 passed
00:15:46.093 Test: admin_get_log_page_with_lpo ...[2024-11-06 13:57:32.205132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.093 [2024-11-06 13:57:32.273756] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:15:46.093 [2024-11-06 13:57:32.286804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.093 passed
00:15:46.093 Test: fabric_property_get ...[2024-11-06 13:57:32.360019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.093 [2024-11-06 13:57:32.361223] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:15:46.093 [2024-11-06 13:57:32.363032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.353 passed
00:15:46.353 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 13:57:32.439510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.353 [2024-11-06 13:57:32.440708] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:15:46.353 [2024-11-06 13:57:32.442533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.353 passed
00:15:46.353 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 13:57:32.518273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.353 [2024-11-06 13:57:32.602750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:46.353 [2024-11-06 13:57:32.618749] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:46.353 [2024-11-06 13:57:32.623823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.613 passed
00:15:46.613 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 13:57:32.697024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.613 [2024-11-06 13:57:32.698220] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:15:46.613 [2024-11-06 13:57:32.700038] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.613 passed
00:15:46.613 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 13:57:32.775175] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.613 [2024-11-06 13:57:32.854750] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:46.613 [2024-11-06 13:57:32.878753] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:46.613 [2024-11-06 13:57:32.883821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.873 passed
00:15:46.873 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 13:57:32.956076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.873 [2024-11-06 13:57:32.957279] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:46.873 [2024-11-06 13:57:32.957297] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:46.873 [2024-11-06 13:57:32.960098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:46.873 passed
00:15:46.873 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 13:57:33.035838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:46.873 [2024-11-06 13:57:33.129753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:46.873 [2024-11-06 13:57:33.137752] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:46.873 [2024-11-06 13:57:33.145753] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:47.134 [2024-11-06 13:57:33.153754] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:47.134 [2024-11-06 13:57:33.182826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:47.134 passed
00:15:47.134 Test: admin_create_io_sq_verify_pc ...[2024-11-06 13:57:33.255051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:47.134 [2024-11-06 13:57:33.281756] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:47.134 [2024-11-06 13:57:33.299262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:47.134 passed
00:15:47.134 Test: admin_create_io_qp_max_qps ...[2024-11-06 13:57:33.374737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:48.517 [2024-11-06 13:57:34.484754] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:15:48.777 [2024-11-06 13:57:34.875387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:48.777 passed
00:15:48.777 Test: admin_create_io_sq_shared_cq ...[2024-11-06 13:57:34.949164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:49.037 [2024-11-06 13:57:35.081754] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:49.037 [2024-11-06 13:57:35.118796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.037 passed
00:15:49.037
00:15:49.037 Run Summary: Type Total Ran Passed Failed Inactive
00:15:49.037 suites 1 1 n/a 0 0
00:15:49.037 tests 18 18 18 0 0
00:15:49.037 asserts 360 360 360 0 n/a
00:15:49.037
00:15:49.037 Elapsed time = 1.504 seconds
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2370081
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2370081 ']'
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2370081
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2370081 00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:49.037 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2370081' 00:15:49.037 killing process with pid 2370081 00:15:49.038 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2370081 00:15:49.038 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2370081 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:49.298 00:15:49.298 real 0m5.599s 00:15:49.298 user 0m15.740s 00:15:49.298 sys 0m0.481s 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.298 ************************************ 00:15:49.298 END TEST nvmf_vfio_user_nvme_compliance 00:15:49.298 ************************************ 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.298 ************************************ 00:15:49.298 START TEST nvmf_vfio_user_fuzz 00:15:49.298 ************************************ 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:49.298 * Looking for test storage... 00:15:49.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:49.298 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.560 13:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.560 --rc genhtml_branch_coverage=1 00:15:49.560 --rc genhtml_function_coverage=1 00:15:49.560 --rc genhtml_legend=1 00:15:49.560 --rc geninfo_all_blocks=1 00:15:49.560 --rc geninfo_unexecuted_blocks=1 00:15:49.560 00:15:49.560 ' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.560 --rc genhtml_branch_coverage=1 00:15:49.560 --rc genhtml_function_coverage=1 00:15:49.560 --rc genhtml_legend=1 00:15:49.560 --rc geninfo_all_blocks=1 00:15:49.560 --rc geninfo_unexecuted_blocks=1 00:15:49.560 00:15:49.560 ' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.560 --rc genhtml_branch_coverage=1 00:15:49.560 --rc genhtml_function_coverage=1 00:15:49.560 --rc genhtml_legend=1 00:15:49.560 --rc geninfo_all_blocks=1 00:15:49.560 --rc geninfo_unexecuted_blocks=1 00:15:49.560 00:15:49.560 ' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:49.560 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:49.560 --rc genhtml_branch_coverage=1 00:15:49.560 --rc genhtml_function_coverage=1 00:15:49.560 --rc genhtml_legend=1 00:15:49.560 --rc geninfo_all_blocks=1 00:15:49.560 --rc geninfo_unexecuted_blocks=1 00:15:49.560 00:15:49.560 ' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.560 13:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.560 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2371197 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2371197' 00:15:49.561 Process pid: 2371197 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2371197 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2371197 ']' 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:49.561 13:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:49.561 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.502 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.502 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:50.502 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.446 malloc0 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:51.446 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:16:23.563 Fuzzing completed. Shutting down the fuzz application
00:16:23.563
00:16:23.563 Dumping successful admin opcodes:
00:16:23.563 8, 9, 10, 24,
00:16:23.563 Dumping successful io opcodes:
00:16:23.563 0,
00:16:23.563 NS: 0x20000081ef00 I/O qp, Total commands completed: 1359810, total successful commands: 5329, random_seed: 1082465984
00:16:23.563 NS: 0x20000081ef00 admin qp, Total commands completed: 301353, total successful commands: 2419, random_seed: 309652736
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2371197
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2371197 ']'
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2371197
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:23.563 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2371197
00:16:23.563 13:58:08
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.563 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.563 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2371197' 00:16:23.563 killing process with pid 2371197 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2371197 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2371197 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:23.564 00:16:23.564 real 0m32.809s 00:16:23.564 user 0m38.184s 00:16:23.564 sys 0m23.409s 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.564 ************************************ 00:16:23.564 END TEST nvmf_vfio_user_fuzz 00:16:23.564 ************************************ 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.564 ************************************ 00:16:23.564 START TEST nvmf_auth_target 00:16:23.564 ************************************ 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.564 * Looking for test storage... 00:16:23.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.564 13:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.564 13:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:23.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.564 --rc genhtml_branch_coverage=1 00:16:23.564 --rc genhtml_function_coverage=1 00:16:23.564 --rc genhtml_legend=1 00:16:23.564 --rc geninfo_all_blocks=1 00:16:23.564 --rc geninfo_unexecuted_blocks=1 00:16:23.564 00:16:23.564 ' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:23.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.564 --rc genhtml_branch_coverage=1 00:16:23.564 --rc genhtml_function_coverage=1 00:16:23.564 --rc genhtml_legend=1 00:16:23.564 --rc geninfo_all_blocks=1 00:16:23.564 --rc geninfo_unexecuted_blocks=1 00:16:23.564 00:16:23.564 ' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:23.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.564 --rc genhtml_branch_coverage=1 00:16:23.564 --rc genhtml_function_coverage=1 00:16:23.564 --rc genhtml_legend=1 00:16:23.564 --rc geninfo_all_blocks=1 00:16:23.564 --rc geninfo_unexecuted_blocks=1 00:16:23.564 00:16:23.564 ' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:23.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.564 --rc genhtml_branch_coverage=1 00:16:23.564 --rc genhtml_function_coverage=1 00:16:23.564 --rc genhtml_legend=1 00:16:23.564 
--rc geninfo_all_blocks=1 00:16:23.564 --rc geninfo_unexecuted_blocks=1 00:16:23.564 00:16:23.564 ' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.564 
13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.564 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:23.565 13:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:23.565 13:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:23.565 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.150 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:30.150 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:30.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:30.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.150 
13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:30.150 Found net devices under 0000:31:00.0: cvl_0_0 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.150 
13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:30.150 Found net devices under 0000:31:00.1: cvl_0_1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.150 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.150 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:16:30.151 00:16:30.151 --- 10.0.0.2 ping statistics --- 00:16:30.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.151 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:16:30.151 00:16:30.151 --- 10.0.0.1 ping statistics --- 00:16:30.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.151 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2381211 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2381211 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2381211 ']' 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:30.151 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.723 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:30.723 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:30.723 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:30.723 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:30.723 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2381436 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3c3aad3fd84b2005ab2f0a77b76587024ce539f651f6450a 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pnP 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3c3aad3fd84b2005ab2f0a77b76587024ce539f651f6450a 0 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3c3aad3fd84b2005ab2f0a77b76587024ce539f651f6450a 0 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3c3aad3fd84b2005ab2f0a77b76587024ce539f651f6450a 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pnP 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pnP 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pnP 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:30.985 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a63eea88d0530e56f1988bcc5caabd2d425b8d45f3164e3d9b3926ee7e550e7f 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NiU 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a63eea88d0530e56f1988bcc5caabd2d425b8d45f3164e3d9b3926ee7e550e7f 3 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a63eea88d0530e56f1988bcc5caabd2d425b8d45f3164e3d9b3926ee7e550e7f 3 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a63eea88d0530e56f1988bcc5caabd2d425b8d45f3164e3d9b3926ee7e550e7f 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NiU 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NiU 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.NiU 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a5df6d5b5c0dbcfa9e35b482c16ee6dd 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zIN 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a5df6d5b5c0dbcfa9e35b482c16ee6dd 1 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a5df6d5b5c0dbcfa9e35b482c16ee6dd 1 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a5df6d5b5c0dbcfa9e35b482c16ee6dd 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zIN 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zIN 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.zIN 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.986 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=88b4857aebfbf181fa76fbc264303c18463078f2d0e2a8ad 00:16:31.247 13:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Ok 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 88b4857aebfbf181fa76fbc264303c18463078f2d0e2a8ad 2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 88b4857aebfbf181fa76fbc264303c18463078f2d0e2a8ad 2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=88b4857aebfbf181fa76fbc264303c18463078f2d0e2a8ad 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Ok 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Ok 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4Ok 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=917b829be6b7342ef45e18ff4f893e26bd52fd8c704f2d16 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cl5 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 917b829be6b7342ef45e18ff4f893e26bd52fd8c704f2d16 2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 917b829be6b7342ef45e18ff4f893e26bd52fd8c704f2d16 2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=917b829be6b7342ef45e18ff4f893e26bd52fd8c704f2d16 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cl5 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cl5 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.cl5 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:31.247 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a6ee6f4b628f384ac02b08dd22b100b1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Uh4 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a6ee6f4b628f384ac02b08dd22b100b1 1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a6ee6f4b628f384ac02b08dd22b100b1 1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a6ee6f4b628f384ac02b08dd22b100b1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Uh4 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Uh4 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Uh4 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2213d5304adc6bd6af28cba904811f9d58702729cf7214e50f054c7ed69d4e35 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WQw 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2213d5304adc6bd6af28cba904811f9d58702729cf7214e50f054c7ed69d4e35 3 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2213d5304adc6bd6af28cba904811f9d58702729cf7214e50f054c7ed69d4e35 3 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2213d5304adc6bd6af28cba904811f9d58702729cf7214e50f054c7ed69d4e35 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:31.248 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WQw 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WQw 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.WQw 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2381211 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2381211 ']' 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
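The `gen_dhchap_key`/`format_key` trace above builds each secret by reading random bytes with `xxd -p -c0 -l <n> /dev/urandom` and then wrapping the resulting hex string in the DHHC-1 representation via an inline `python -` heredoc (the heredoc body itself is not echoed in the xtrace). Below is a minimal standalone sketch of that formatting step; the function name is mine, and the little-endian CRC-32 suffix is an assumption inferred from the DHHC-1 secret format, but the 64-character base64 prefix can be checked against the `DHHC-1:00:M2Mz...` secret that appears later in this log.

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    """Render a hex secret in DHHC-1 form, as the trace's `python -` step does.

    The secret is the ASCII hex string itself (not its decoded bytes); a CRC-32
    of that string is appended (assumed little-endian) before base64 encoding.
    """
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return f"DHHC-1:{digest:02d}:{base64.b64encode(data + crc).decode()}:"

# key0 from the trace: 48 hex chars (24 bytes of /dev/urandom), digest 0 (null)
print(format_dhchap_key("3c3aad3fd84b2005ab2f0a77b76587024ce539f651f6450a", 0))
```

The resulting string is what later gets passed to `nvme connect` as `--dhchap-secret DHHC-1:00:...`, which is how the keys written to the `/tmp/spdk.key-*` files are ultimately consumed on the host side.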
00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2381436 /var/tmp/host.sock 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2381436 ']' 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:31.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.509 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pnP 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pnP 00:16:31.769 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pnP 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.NiU ]] 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NiU 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NiU 00:16:32.029 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NiU 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zIN 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zIN 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zIN 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.4Ok ]] 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Ok 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Ok 00:16:32.289 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Ok 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cl5 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cl5 00:16:32.549 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cl5 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Uh4 ]] 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uh4 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uh4 00:16:32.809 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uh4 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WQw 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.WQw 00:16:32.809 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.WQw 00:16:33.069 13:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:33.069 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:33.069 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.069 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.069 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.069 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.331 13:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.331 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.590 00:16:33.590 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.590 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.590 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.850 { 00:16:33.850 "cntlid": 1, 00:16:33.850 "qid": 0, 00:16:33.850 "state": "enabled", 00:16:33.850 "thread": "nvmf_tgt_poll_group_000", 00:16:33.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:33.850 "listen_address": { 00:16:33.850 "trtype": "TCP", 00:16:33.850 "adrfam": "IPv4", 00:16:33.850 "traddr": "10.0.0.2", 00:16:33.850 "trsvcid": "4420" 00:16:33.850 }, 00:16:33.850 "peer_address": { 00:16:33.850 "trtype": "TCP", 00:16:33.850 "adrfam": "IPv4", 00:16:33.850 "traddr": "10.0.0.1", 00:16:33.850 "trsvcid": "42696" 00:16:33.850 }, 00:16:33.850 "auth": { 00:16:33.850 "state": "completed", 00:16:33.850 "digest": "sha256", 00:16:33.850 "dhgroup": "null" 00:16:33.850 } 00:16:33.850 } 00:16:33.850 ]' 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.850 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.850 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.850 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.850 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.850 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.850 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.111 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:34.111 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:34.681 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.940 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.941 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.941 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.200 00:16:35.200 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.200 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.200 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.460 { 00:16:35.460 "cntlid": 3, 00:16:35.460 "qid": 0, 00:16:35.460 "state": "enabled", 00:16:35.460 "thread": "nvmf_tgt_poll_group_000", 00:16:35.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:35.460 "listen_address": { 00:16:35.460 "trtype": "TCP", 00:16:35.460 "adrfam": "IPv4", 00:16:35.460 
"traddr": "10.0.0.2", 00:16:35.460 "trsvcid": "4420" 00:16:35.460 }, 00:16:35.460 "peer_address": { 00:16:35.460 "trtype": "TCP", 00:16:35.460 "adrfam": "IPv4", 00:16:35.460 "traddr": "10.0.0.1", 00:16:35.460 "trsvcid": "42730" 00:16:35.460 }, 00:16:35.460 "auth": { 00:16:35.460 "state": "completed", 00:16:35.460 "digest": "sha256", 00:16:35.460 "dhgroup": "null" 00:16:35.460 } 00:16:35.460 } 00:16:35.460 ]' 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.460 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.461 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.461 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.720 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:35.720 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:36.291 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.291 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:36.291 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.291 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.551 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.552 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.552 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.552 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.812 00:16:36.812 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.812 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.812 
13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.073 { 00:16:37.073 "cntlid": 5, 00:16:37.073 "qid": 0, 00:16:37.073 "state": "enabled", 00:16:37.073 "thread": "nvmf_tgt_poll_group_000", 00:16:37.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:37.073 "listen_address": { 00:16:37.073 "trtype": "TCP", 00:16:37.073 "adrfam": "IPv4", 00:16:37.073 "traddr": "10.0.0.2", 00:16:37.073 "trsvcid": "4420" 00:16:37.073 }, 00:16:37.073 "peer_address": { 00:16:37.073 "trtype": "TCP", 00:16:37.073 "adrfam": "IPv4", 00:16:37.073 "traddr": "10.0.0.1", 00:16:37.073 "trsvcid": "55706" 00:16:37.073 }, 00:16:37.073 "auth": { 00:16:37.073 "state": "completed", 00:16:37.073 "digest": "sha256", 00:16:37.073 "dhgroup": "null" 00:16:37.073 } 00:16:37.073 } 00:16:37.073 ]' 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.073 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.333 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:37.333 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.273 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.533 00:16:38.533 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.533 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.533 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.794 
13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.794 { 00:16:38.794 "cntlid": 7, 00:16:38.794 "qid": 0, 00:16:38.794 "state": "enabled", 00:16:38.794 "thread": "nvmf_tgt_poll_group_000", 00:16:38.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:38.794 "listen_address": { 00:16:38.794 "trtype": "TCP", 00:16:38.794 "adrfam": "IPv4", 00:16:38.794 "traddr": "10.0.0.2", 00:16:38.794 "trsvcid": "4420" 00:16:38.794 }, 00:16:38.794 "peer_address": { 00:16:38.794 "trtype": "TCP", 00:16:38.794 "adrfam": "IPv4", 00:16:38.794 "traddr": "10.0.0.1", 00:16:38.794 "trsvcid": "55730" 00:16:38.794 }, 00:16:38.794 "auth": { 00:16:38.794 "state": "completed", 00:16:38.794 "digest": "sha256", 00:16:38.794 "dhgroup": "null" 00:16:38.794 } 00:16:38.794 } 00:16:38.794 ]' 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.794 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.054 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:39.054 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.624 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.884 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.885 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.885 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.144 00:16:40.144 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.144 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.144 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.144 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.145 { 00:16:40.145 "cntlid": 9, 00:16:40.145 "qid": 0, 00:16:40.145 "state": "enabled", 00:16:40.145 "thread": "nvmf_tgt_poll_group_000", 00:16:40.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:40.145 "listen_address": { 00:16:40.145 "trtype": "TCP", 00:16:40.145 "adrfam": "IPv4", 00:16:40.145 "traddr": "10.0.0.2", 00:16:40.145 "trsvcid": "4420" 00:16:40.145 }, 00:16:40.145 "peer_address": { 00:16:40.145 "trtype": "TCP", 00:16:40.145 "adrfam": "IPv4", 00:16:40.145 "traddr": "10.0.0.1", 00:16:40.145 "trsvcid": "55748" 00:16:40.145 
}, 00:16:40.145 "auth": { 00:16:40.145 "state": "completed", 00:16:40.145 "digest": "sha256", 00:16:40.145 "dhgroup": "ffdhe2048" 00:16:40.145 } 00:16:40.145 } 00:16:40.145 ]' 00:16:40.145 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.405 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.672 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:40.672 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret 
DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:41.324 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.325 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.609 00:16:41.609 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.609 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.609 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.869 { 00:16:41.869 "cntlid": 11, 00:16:41.869 "qid": 0, 00:16:41.869 "state": "enabled", 00:16:41.869 "thread": "nvmf_tgt_poll_group_000", 00:16:41.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:41.869 "listen_address": { 00:16:41.869 "trtype": "TCP", 00:16:41.869 "adrfam": "IPv4", 00:16:41.869 "traddr": "10.0.0.2", 00:16:41.869 "trsvcid": "4420" 00:16:41.869 }, 00:16:41.869 "peer_address": { 00:16:41.869 "trtype": "TCP", 00:16:41.869 "adrfam": "IPv4", 00:16:41.869 "traddr": "10.0.0.1", 00:16:41.869 "trsvcid": "55790" 00:16:41.869 }, 00:16:41.869 "auth": { 00:16:41.869 "state": "completed", 00:16:41.869 "digest": "sha256", 00:16:41.869 "dhgroup": "ffdhe2048" 00:16:41.869 } 00:16:41.869 } 00:16:41.869 ]' 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.869 13:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.869 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.130 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:42.130 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:42.700 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.960 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.960 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.220 00:16:43.220 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.220 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.220 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.480 13:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.480 { 00:16:43.480 "cntlid": 13, 00:16:43.480 "qid": 0, 00:16:43.480 "state": "enabled", 00:16:43.480 "thread": "nvmf_tgt_poll_group_000", 00:16:43.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:43.480 "listen_address": { 00:16:43.480 "trtype": "TCP", 00:16:43.480 "adrfam": "IPv4", 00:16:43.480 "traddr": "10.0.0.2", 00:16:43.480 "trsvcid": "4420" 00:16:43.480 }, 00:16:43.480 "peer_address": { 00:16:43.480 "trtype": "TCP", 00:16:43.480 "adrfam": "IPv4", 00:16:43.480 "traddr": "10.0.0.1", 00:16:43.480 "trsvcid": "55808" 00:16:43.480 }, 00:16:43.480 "auth": { 00:16:43.480 "state": "completed", 00:16:43.480 "digest": "sha256", 00:16:43.480 "dhgroup": "ffdhe2048" 00:16:43.480 } 00:16:43.480 } 00:16:43.480 ]' 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.480 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.740 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:43.740 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:44.309 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.310 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.571 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.831 00:16:44.831 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.831 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.831 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.091 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.091 { 00:16:45.091 "cntlid": 15, 00:16:45.091 "qid": 0, 00:16:45.091 "state": "enabled", 00:16:45.091 "thread": "nvmf_tgt_poll_group_000", 00:16:45.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:45.091 "listen_address": { 00:16:45.091 "trtype": "TCP", 00:16:45.091 "adrfam": "IPv4", 00:16:45.092 "traddr": "10.0.0.2", 00:16:45.092 "trsvcid": "4420" 00:16:45.092 }, 00:16:45.092 "peer_address": { 00:16:45.092 "trtype": "TCP", 00:16:45.092 "adrfam": "IPv4", 00:16:45.092 "traddr": "10.0.0.1", 
00:16:45.092 "trsvcid": "55838" 00:16:45.092 }, 00:16:45.092 "auth": { 00:16:45.092 "state": "completed", 00:16:45.092 "digest": "sha256", 00:16:45.092 "dhgroup": "ffdhe2048" 00:16:45.092 } 00:16:45.092 } 00:16:45.092 ]' 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.092 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.352 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:45.352 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.922 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.182 13:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.182 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.442 00:16:46.442 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.442 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.442 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.703 { 00:16:46.703 "cntlid": 17, 00:16:46.703 "qid": 0, 00:16:46.703 "state": "enabled", 00:16:46.703 "thread": "nvmf_tgt_poll_group_000", 00:16:46.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:46.703 "listen_address": { 00:16:46.703 "trtype": "TCP", 00:16:46.703 "adrfam": "IPv4", 00:16:46.703 "traddr": "10.0.0.2", 00:16:46.703 "trsvcid": "4420" 00:16:46.703 }, 00:16:46.703 "peer_address": { 00:16:46.703 "trtype": "TCP", 00:16:46.703 "adrfam": "IPv4", 00:16:46.703 "traddr": "10.0.0.1", 00:16:46.703 "trsvcid": "55862" 00:16:46.703 }, 00:16:46.703 "auth": { 00:16:46.703 "state": "completed", 00:16:46.703 "digest": "sha256", 00:16:46.703 "dhgroup": "ffdhe3072" 00:16:46.703 } 00:16:46.703 } 00:16:46.703 ]' 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.703 13:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.703 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.963 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:46.963 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:47.533 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.533 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:47.533 13:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.533 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.793 13:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.793 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.794 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.794 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.794 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.054 00:16:48.054 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.054 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.054 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.314 { 00:16:48.314 "cntlid": 19, 00:16:48.314 "qid": 0, 00:16:48.314 "state": "enabled", 00:16:48.314 "thread": "nvmf_tgt_poll_group_000", 00:16:48.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:48.314 "listen_address": { 00:16:48.314 "trtype": "TCP", 00:16:48.314 "adrfam": "IPv4", 00:16:48.314 "traddr": "10.0.0.2", 00:16:48.314 "trsvcid": "4420" 00:16:48.314 }, 00:16:48.314 "peer_address": { 00:16:48.314 "trtype": "TCP", 00:16:48.314 "adrfam": "IPv4", 00:16:48.314 "traddr": "10.0.0.1", 00:16:48.314 "trsvcid": "41182" 00:16:48.314 }, 00:16:48.314 "auth": { 00:16:48.314 "state": "completed", 00:16:48.314 "digest": "sha256", 00:16:48.314 "dhgroup": "ffdhe3072" 00:16:48.314 } 00:16:48.314 } 00:16:48.314 ]' 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.314 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.575 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:48.575 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:49.146 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.146 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:49.146 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.146 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.406 13:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.406 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.666 00:16:49.666 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.666 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.666 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.925 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.925 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.925 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.925 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.926 { 00:16:49.926 "cntlid": 21, 00:16:49.926 "qid": 0, 00:16:49.926 "state": "enabled", 00:16:49.926 "thread": "nvmf_tgt_poll_group_000", 00:16:49.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:49.926 "listen_address": { 00:16:49.926 "trtype": "TCP", 00:16:49.926 "adrfam": "IPv4", 00:16:49.926 "traddr": "10.0.0.2", 00:16:49.926 
"trsvcid": "4420" 00:16:49.926 }, 00:16:49.926 "peer_address": { 00:16:49.926 "trtype": "TCP", 00:16:49.926 "adrfam": "IPv4", 00:16:49.926 "traddr": "10.0.0.1", 00:16:49.926 "trsvcid": "41210" 00:16:49.926 }, 00:16:49.926 "auth": { 00:16:49.926 "state": "completed", 00:16:49.926 "digest": "sha256", 00:16:49.926 "dhgroup": "ffdhe3072" 00:16:49.926 } 00:16:49.926 } 00:16:49.926 ]' 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.926 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.186 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:50.186 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:50.757 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.757 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.018 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.279 00:16:51.279 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.279 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:51.279 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.539 { 00:16:51.539 "cntlid": 23, 00:16:51.539 "qid": 0, 00:16:51.539 "state": "enabled", 00:16:51.539 "thread": "nvmf_tgt_poll_group_000", 00:16:51.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:51.539 "listen_address": { 00:16:51.539 "trtype": "TCP", 00:16:51.539 "adrfam": "IPv4", 00:16:51.539 "traddr": "10.0.0.2", 00:16:51.539 "trsvcid": "4420" 00:16:51.539 }, 00:16:51.539 "peer_address": { 00:16:51.539 "trtype": "TCP", 00:16:51.539 "adrfam": "IPv4", 00:16:51.539 "traddr": "10.0.0.1", 00:16:51.539 "trsvcid": "41234" 00:16:51.539 }, 00:16:51.539 "auth": { 00:16:51.539 "state": "completed", 00:16:51.539 "digest": "sha256", 00:16:51.539 "dhgroup": "ffdhe3072" 00:16:51.539 } 00:16:51.539 } 00:16:51.539 ]' 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.539 13:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.539 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:51.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
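The cycle the log repeats above (restrict digests/dhgroups, register the host with a DH-HMAC-CHAP key, attach, inspect the qpair, tear down) can be sketched as a self-contained shell function. This is an illustrative reconstruction, not the real `target/auth.sh`: `hostrpc` and `rpc_cmd` are stubbed to print the RPC they would issue instead of calling `scripts/rpc.py`, and the controller key is always passed (the `key3` iterations in the log omit `--dhchap-ctrlr-key` because `ckey3` is empty).

```shell
#!/usr/bin/env bash
# Stub stand-ins for the helpers in target/auth.sh: the real hostrpc runs
# scripts/rpc.py -s /var/tmp/host.sock; here we just echo the command line.
hostrpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }
rpc_cmd() { echo "rpc.py $*"; }

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# One connect_authenticate iteration as the log shows it:
#   digest/dhgroup restriction -> add_host with keyN/ckeyN -> attach ->
#   qpair auth check -> detach -> remove_host.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"   # auth.state should be "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# Example: the sha256/ffdhe3072/key2 iteration from earlier in the log.
connect_authenticate sha256 ffdhe3072 2
```

The outer loops in the log simply iterate this function over every `dhgroup` in `${dhgroups[@]}` and every `keyid` in `${!keys[@]}`.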
00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.370 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.631 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.632 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.892 00:16:52.892 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.892 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.892 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.151 13:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.151 { 00:16:53.151 "cntlid": 25, 00:16:53.151 "qid": 0, 00:16:53.151 "state": "enabled", 00:16:53.151 "thread": "nvmf_tgt_poll_group_000", 00:16:53.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:53.151 "listen_address": { 00:16:53.151 "trtype": "TCP", 00:16:53.151 "adrfam": "IPv4", 00:16:53.151 "traddr": "10.0.0.2", 00:16:53.151 "trsvcid": "4420" 00:16:53.151 }, 00:16:53.151 "peer_address": { 00:16:53.151 "trtype": "TCP", 00:16:53.151 "adrfam": "IPv4", 00:16:53.151 "traddr": "10.0.0.1", 00:16:53.151 "trsvcid": "41270" 00:16:53.151 }, 00:16:53.151 "auth": { 00:16:53.151 "state": "completed", 00:16:53.151 "digest": "sha256", 00:16:53.151 "dhgroup": "ffdhe4096" 00:16:53.151 } 00:16:53.151 } 00:16:53.151 ]' 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.151 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.411 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.411 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.411 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.411 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:53.411 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.350 13:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.350 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.610 00:16:54.610 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.610 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.610 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.870 { 00:16:54.870 "cntlid": 27, 00:16:54.870 "qid": 0, 00:16:54.870 "state": "enabled", 00:16:54.870 "thread": "nvmf_tgt_poll_group_000", 00:16:54.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:54.870 "listen_address": { 00:16:54.870 "trtype": "TCP", 00:16:54.870 "adrfam": "IPv4", 00:16:54.870 "traddr": "10.0.0.2", 00:16:54.870 
"trsvcid": "4420" 00:16:54.870 }, 00:16:54.870 "peer_address": { 00:16:54.870 "trtype": "TCP", 00:16:54.870 "adrfam": "IPv4", 00:16:54.870 "traddr": "10.0.0.1", 00:16:54.870 "trsvcid": "41292" 00:16:54.870 }, 00:16:54.870 "auth": { 00:16:54.870 "state": "completed", 00:16:54.870 "digest": "sha256", 00:16:54.870 "dhgroup": "ffdhe4096" 00:16:54.870 } 00:16:54.870 } 00:16:54.870 ]' 00:16:54.870 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.870 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.871 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.131 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:55.131 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.701 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.961 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.221 00:16:56.222 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.222 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:56.222 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.483 { 00:16:56.483 "cntlid": 29, 00:16:56.483 "qid": 0, 00:16:56.483 "state": "enabled", 00:16:56.483 "thread": "nvmf_tgt_poll_group_000", 00:16:56.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:56.483 "listen_address": { 00:16:56.483 "trtype": "TCP", 00:16:56.483 "adrfam": "IPv4", 00:16:56.483 "traddr": "10.0.0.2", 00:16:56.483 "trsvcid": "4420" 00:16:56.483 }, 00:16:56.483 "peer_address": { 00:16:56.483 "trtype": "TCP", 00:16:56.483 "adrfam": "IPv4", 00:16:56.483 "traddr": "10.0.0.1", 00:16:56.483 "trsvcid": "41328" 00:16:56.483 }, 00:16:56.483 "auth": { 00:16:56.483 "state": "completed", 00:16:56.483 "digest": "sha256", 00:16:56.483 "dhgroup": "ffdhe4096" 00:16:56.483 } 00:16:56.483 } 00:16:56.483 ]' 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.483 13:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.483 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.744 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.744 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.744 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.744 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:56.744 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.685 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.946 00:16:57.946 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.946 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.946 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.206 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.206 { 00:16:58.206 "cntlid": 31, 00:16:58.206 "qid": 0, 00:16:58.206 "state": "enabled", 00:16:58.206 "thread": "nvmf_tgt_poll_group_000", 00:16:58.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:58.206 "listen_address": { 00:16:58.206 "trtype": "TCP", 00:16:58.206 "adrfam": "IPv4", 00:16:58.206 "traddr": "10.0.0.2", 00:16:58.206 "trsvcid": "4420" 00:16:58.206 }, 00:16:58.206 "peer_address": { 00:16:58.206 "trtype": "TCP", 00:16:58.206 "adrfam": "IPv4", 00:16:58.206 "traddr": "10.0.0.1", 00:16:58.206 "trsvcid": "41690" 00:16:58.206 }, 00:16:58.206 "auth": { 00:16:58.206 "state": "completed", 00:16:58.206 "digest": "sha256", 00:16:58.207 "dhgroup": "ffdhe4096" 00:16:58.207 } 00:16:58.207 } 00:16:58.207 ]' 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.207 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.467 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:58.467 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.038 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.038 13:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.298 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.558 00:16:59.558 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.558 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.558 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.818 { 00:16:59.818 "cntlid": 33, 00:16:59.818 "qid": 0, 00:16:59.818 "state": "enabled", 00:16:59.818 "thread": "nvmf_tgt_poll_group_000", 00:16:59.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:59.818 "listen_address": { 00:16:59.818 "trtype": "TCP", 00:16:59.819 "adrfam": "IPv4", 00:16:59.819 "traddr": "10.0.0.2", 00:16:59.819 
"trsvcid": "4420" 00:16:59.819 }, 00:16:59.819 "peer_address": { 00:16:59.819 "trtype": "TCP", 00:16:59.819 "adrfam": "IPv4", 00:16:59.819 "traddr": "10.0.0.1", 00:16:59.819 "trsvcid": "41712" 00:16:59.819 }, 00:16:59.819 "auth": { 00:16:59.819 "state": "completed", 00:16:59.819 "digest": "sha256", 00:16:59.819 "dhgroup": "ffdhe6144" 00:16:59.819 } 00:16:59.819 } 00:16:59.819 ]' 00:16:59.819 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.819 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.819 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.819 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.819 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.819 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.819 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.819 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.079 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:00.079 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:00.649 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.649 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:00.649 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.649 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.649 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.650 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.650 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.650 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.910 13:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.910 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.170 00:17:01.170 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.170 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.170 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.430 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.430 { 00:17:01.430 "cntlid": 35, 00:17:01.430 "qid": 0, 00:17:01.430 "state": "enabled", 00:17:01.431 "thread": "nvmf_tgt_poll_group_000", 00:17:01.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:01.431 "listen_address": { 00:17:01.431 "trtype": "TCP", 00:17:01.431 "adrfam": "IPv4", 00:17:01.431 "traddr": "10.0.0.2", 00:17:01.431 "trsvcid": "4420" 00:17:01.431 }, 00:17:01.431 "peer_address": { 00:17:01.431 "trtype": "TCP", 00:17:01.431 "adrfam": "IPv4", 00:17:01.431 "traddr": "10.0.0.1", 00:17:01.431 "trsvcid": "41730" 00:17:01.431 }, 00:17:01.431 "auth": { 00:17:01.431 "state": "completed", 00:17:01.431 "digest": "sha256", 00:17:01.431 "dhgroup": "ffdhe6144" 00:17:01.431 } 00:17:01.431 } 00:17:01.431 ]' 00:17:01.431 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.431 13:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.431 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:01.691 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.631 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.891 00:17:02.892 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.892 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.892 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.152 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.152 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.153 13:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.153 { 00:17:03.153 "cntlid": 37, 00:17:03.153 "qid": 0, 00:17:03.153 "state": "enabled", 00:17:03.153 "thread": "nvmf_tgt_poll_group_000", 00:17:03.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:03.153 "listen_address": { 00:17:03.153 "trtype": "TCP", 00:17:03.153 "adrfam": "IPv4", 00:17:03.153 "traddr": "10.0.0.2", 00:17:03.153 "trsvcid": "4420" 00:17:03.153 }, 00:17:03.153 "peer_address": { 00:17:03.153 "trtype": "TCP", 00:17:03.153 "adrfam": "IPv4", 00:17:03.153 "traddr": "10.0.0.1", 00:17:03.153 "trsvcid": "41754" 00:17:03.153 }, 00:17:03.153 "auth": { 00:17:03.153 "state": "completed", 00:17:03.153 "digest": "sha256", 00:17:03.153 "dhgroup": "ffdhe6144" 00:17:03.153 } 00:17:03.153 } 00:17:03.153 ]' 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.153 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:03.415 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.356 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.357 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.618 00:17:04.618 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.618 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.618 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.879 { 00:17:04.879 "cntlid": 39, 00:17:04.879 "qid": 0, 00:17:04.879 "state": "enabled", 00:17:04.879 "thread": "nvmf_tgt_poll_group_000", 00:17:04.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:04.879 "listen_address": { 00:17:04.879 "trtype": "TCP", 00:17:04.879 "adrfam": 
"IPv4", 00:17:04.879 "traddr": "10.0.0.2", 00:17:04.879 "trsvcid": "4420" 00:17:04.879 }, 00:17:04.879 "peer_address": { 00:17:04.879 "trtype": "TCP", 00:17:04.879 "adrfam": "IPv4", 00:17:04.879 "traddr": "10.0.0.1", 00:17:04.879 "trsvcid": "41774" 00:17:04.879 }, 00:17:04.879 "auth": { 00:17:04.879 "state": "completed", 00:17:04.879 "digest": "sha256", 00:17:04.879 "dhgroup": "ffdhe6144" 00:17:04.879 } 00:17:04.879 } 00:17:04.879 ]' 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.879 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:05.140 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.082 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.083 
13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.083 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.653 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.653 13:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.653 { 00:17:06.653 "cntlid": 41, 00:17:06.653 "qid": 0, 00:17:06.653 "state": "enabled", 00:17:06.653 "thread": "nvmf_tgt_poll_group_000", 00:17:06.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:06.653 "listen_address": { 00:17:06.653 "trtype": "TCP", 00:17:06.653 "adrfam": "IPv4", 00:17:06.653 "traddr": "10.0.0.2", 00:17:06.653 "trsvcid": "4420" 00:17:06.653 }, 00:17:06.653 "peer_address": { 00:17:06.653 "trtype": "TCP", 00:17:06.653 "adrfam": "IPv4", 00:17:06.653 "traddr": "10.0.0.1", 00:17:06.653 "trsvcid": "41790" 00:17:06.653 }, 00:17:06.653 "auth": { 00:17:06.653 "state": "completed", 00:17:06.653 "digest": "sha256", 00:17:06.653 "dhgroup": "ffdhe8192" 00:17:06.653 } 00:17:06.653 } 00:17:06.653 ]' 00:17:06.653 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.913 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:06.913 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.913 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.913 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.913 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.913 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.913 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.174 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:07.174 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.746 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.006 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.267 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.527 13:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.527 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.527 { 00:17:08.527 "cntlid": 43, 00:17:08.527 "qid": 0, 00:17:08.527 "state": "enabled", 00:17:08.527 "thread": "nvmf_tgt_poll_group_000", 00:17:08.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:08.527 "listen_address": { 00:17:08.527 "trtype": "TCP", 00:17:08.527 "adrfam": "IPv4", 00:17:08.527 "traddr": "10.0.0.2", 00:17:08.527 "trsvcid": "4420" 00:17:08.527 }, 00:17:08.527 "peer_address": { 00:17:08.527 "trtype": "TCP", 00:17:08.527 "adrfam": "IPv4", 00:17:08.527 "traddr": "10.0.0.1", 00:17:08.527 "trsvcid": "33650" 00:17:08.528 }, 00:17:08.528 "auth": { 00:17:08.528 "state": "completed", 00:17:08.528 "digest": "sha256", 00:17:08.528 "dhgroup": "ffdhe8192" 00:17:08.528 } 00:17:08.528 } 00:17:08.528 ]' 00:17:08.528 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.528 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.528 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.787 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.787 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.787 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.787 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.787 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.048 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:09.048 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.618 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.879 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.139 00:17:10.139 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.139 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.139 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.400 { 00:17:10.400 "cntlid": 45, 00:17:10.400 "qid": 0, 00:17:10.400 "state": "enabled", 00:17:10.400 "thread": "nvmf_tgt_poll_group_000", 00:17:10.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:10.400 
"listen_address": { 00:17:10.400 "trtype": "TCP", 00:17:10.400 "adrfam": "IPv4", 00:17:10.400 "traddr": "10.0.0.2", 00:17:10.400 "trsvcid": "4420" 00:17:10.400 }, 00:17:10.400 "peer_address": { 00:17:10.400 "trtype": "TCP", 00:17:10.400 "adrfam": "IPv4", 00:17:10.400 "traddr": "10.0.0.1", 00:17:10.400 "trsvcid": "33680" 00:17:10.400 }, 00:17:10.400 "auth": { 00:17:10.400 "state": "completed", 00:17:10.400 "digest": "sha256", 00:17:10.400 "dhgroup": "ffdhe8192" 00:17:10.400 } 00:17:10.400 } 00:17:10.400 ]' 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.400 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.660 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.660 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.660 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.660 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:10.661 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.232 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.492 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.062 00:17:12.062 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.062 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:12.062 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.323 { 00:17:12.323 "cntlid": 47, 00:17:12.323 "qid": 0, 00:17:12.323 "state": "enabled", 00:17:12.323 "thread": "nvmf_tgt_poll_group_000", 00:17:12.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:12.323 "listen_address": { 00:17:12.323 "trtype": "TCP", 00:17:12.323 "adrfam": "IPv4", 00:17:12.323 "traddr": "10.0.0.2", 00:17:12.323 "trsvcid": "4420" 00:17:12.323 }, 00:17:12.323 "peer_address": { 00:17:12.323 "trtype": "TCP", 00:17:12.323 "adrfam": "IPv4", 00:17:12.323 "traddr": "10.0.0.1", 00:17:12.323 "trsvcid": "33710" 00:17:12.323 }, 00:17:12.323 "auth": { 00:17:12.323 "state": "completed", 00:17:12.323 "digest": "sha256", 00:17:12.323 "dhgroup": "ffdhe8192" 00:17:12.323 } 00:17:12.323 } 00:17:12.323 ]' 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.323 13:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.323 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.584 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:12.584 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.155 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.416 
13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.416 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.677 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.677 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.938 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.938 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.938 { 00:17:13.938 "cntlid": 49, 00:17:13.938 "qid": 0, 00:17:13.938 "state": "enabled", 00:17:13.938 "thread": "nvmf_tgt_poll_group_000", 00:17:13.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:13.938 "listen_address": { 00:17:13.938 "trtype": "TCP", 00:17:13.938 "adrfam": "IPv4", 00:17:13.938 "traddr": "10.0.0.2", 00:17:13.938 "trsvcid": "4420" 00:17:13.938 }, 00:17:13.938 "peer_address": { 00:17:13.938 "trtype": "TCP", 00:17:13.938 "adrfam": "IPv4", 00:17:13.938 "traddr": "10.0.0.1", 00:17:13.938 "trsvcid": "33726" 00:17:13.938 }, 00:17:13.938 "auth": { 00:17:13.938 "state": "completed", 00:17:13.938 "digest": "sha384", 00:17:13.938 "dhgroup": "null" 00:17:13.938 } 00:17:13.938 } 00:17:13.938 ]' 00:17:13.938 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:13.938 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.198 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:14.198 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.770 13:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.770 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.034 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.296 00:17:15.296 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.296 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.296 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.556 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.556 { 00:17:15.556 "cntlid": 51, 00:17:15.556 "qid": 0, 00:17:15.556 "state": "enabled", 00:17:15.556 "thread": "nvmf_tgt_poll_group_000", 00:17:15.556 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:15.556 "listen_address": { 00:17:15.556 "trtype": "TCP", 00:17:15.556 "adrfam": "IPv4", 00:17:15.556 "traddr": "10.0.0.2", 00:17:15.556 "trsvcid": "4420" 00:17:15.556 }, 00:17:15.556 "peer_address": { 00:17:15.556 "trtype": "TCP", 00:17:15.556 "adrfam": "IPv4", 00:17:15.556 "traddr": "10.0.0.1", 00:17:15.556 "trsvcid": "33752" 00:17:15.556 }, 00:17:15.557 "auth": { 00:17:15.557 "state": "completed", 00:17:15.557 "digest": "sha384", 00:17:15.557 "dhgroup": "null" 00:17:15.557 } 00:17:15.557 } 00:17:15.557 ]' 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.557 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.816 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:15.816 13:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.386 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.646 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.907 00:17:16.907 13:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.907 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.907 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.167 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.167 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.167 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.168 { 00:17:17.168 "cntlid": 53, 00:17:17.168 "qid": 0, 00:17:17.168 "state": "enabled", 00:17:17.168 "thread": "nvmf_tgt_poll_group_000", 00:17:17.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:17.168 "listen_address": { 00:17:17.168 "trtype": "TCP", 00:17:17.168 "adrfam": "IPv4", 00:17:17.168 "traddr": "10.0.0.2", 00:17:17.168 "trsvcid": "4420" 00:17:17.168 }, 00:17:17.168 "peer_address": { 00:17:17.168 "trtype": "TCP", 00:17:17.168 "adrfam": "IPv4", 00:17:17.168 "traddr": "10.0.0.1", 00:17:17.168 "trsvcid": "52430" 00:17:17.168 }, 00:17:17.168 "auth": { 00:17:17.168 "state": "completed", 00:17:17.168 "digest": "sha384", 00:17:17.168 "dhgroup": "null" 00:17:17.168 } 00:17:17.168 } 00:17:17.168 ]' 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.168 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.428 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:17.429 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:18.000 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.000 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:18.000 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.000 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.261 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:18.262 
13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.262 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.522 00:17:18.522 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.522 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.522 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.783 13:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.783 { 00:17:18.783 "cntlid": 55, 00:17:18.783 "qid": 0, 00:17:18.783 "state": "enabled", 00:17:18.783 "thread": "nvmf_tgt_poll_group_000", 00:17:18.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:18.783 "listen_address": { 00:17:18.783 "trtype": "TCP", 00:17:18.783 "adrfam": "IPv4", 00:17:18.783 "traddr": "10.0.0.2", 00:17:18.783 "trsvcid": "4420" 00:17:18.783 }, 00:17:18.783 "peer_address": { 00:17:18.783 "trtype": "TCP", 00:17:18.783 "adrfam": "IPv4", 00:17:18.783 "traddr": "10.0.0.1", 00:17:18.783 "trsvcid": "52466" 00:17:18.783 }, 00:17:18.783 "auth": { 00:17:18.783 "state": "completed", 00:17:18.783 "digest": "sha384", 00:17:18.783 "dhgroup": "null" 00:17:18.783 } 00:17:18.783 } 00:17:18.783 ]' 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.783 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.783 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.783 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.783 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.044 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:19.044 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.616 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.617 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.617 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.617 13:59:05 
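
At this point the trace moves from the `null` dhgroup to `ffdhe2048`, repeating the same per-key cycle: set the host's allowed digest/dhgroup, add the host to the subsystem with a DH-CHAP key, attach, verify, detach, and remove. A minimal runnable sketch of that sweep, reconstructed from the `target/auth.sh@118`–`@121` loop markers above (the `run_rpc` wrapper is hypothetical and only echoes, so no SPDK target is needed; real runs go through `rpc.py -s /var/tmp/host.sock` as shown in the log):

```shell
#!/usr/bin/env bash
# Sketch of the digest x dhgroup x key sweep driving the log lines above.
# run_rpc is a hypothetical stand-in for "rpc.py -s /var/tmp/host.sock";
# it echoes the command instead of executing it.
run_rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

digests=(sha384)                  # this section of the log covers sha384
dhgroups=(null ffdhe2048)         # null finished above; ffdhe2048 starts here
keys=(key0 key1 key2 key3)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Restrict the host to one digest/dhgroup pair, then authenticate.
      run_rpc bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
      run_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid"
      run_rpc bdev_nvme_detach_controller nvme0
    done
  done
done
```

In the actual test, keys 0–2 also pass a matching controller (bidirectional) key via `--dhchap-ctrlr-key ckeyN`, while key3 is unidirectional — visible above in the key3 cycle, which omits the `ckey` argument.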
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.912 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.228 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.228 { 00:17:20.228 "cntlid": 57, 00:17:20.228 "qid": 0, 00:17:20.228 "state": "enabled", 00:17:20.228 "thread": "nvmf_tgt_poll_group_000", 00:17:20.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:20.228 "listen_address": { 00:17:20.228 "trtype": "TCP", 00:17:20.228 "adrfam": "IPv4", 00:17:20.228 "traddr": "10.0.0.2", 00:17:20.228 
"trsvcid": "4420" 00:17:20.228 }, 00:17:20.228 "peer_address": { 00:17:20.228 "trtype": "TCP", 00:17:20.228 "adrfam": "IPv4", 00:17:20.228 "traddr": "10.0.0.1", 00:17:20.228 "trsvcid": "52496" 00:17:20.228 }, 00:17:20.228 "auth": { 00:17:20.228 "state": "completed", 00:17:20.228 "digest": "sha384", 00:17:20.228 "dhgroup": "ffdhe2048" 00:17:20.228 } 00:17:20.228 } 00:17:20.228 ]' 00:17:20.228 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.491 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.752 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:20.752 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.323 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.584 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:21.584 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.584 13:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.584 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.584 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.585 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.585 00:17:21.845 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.845 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.845 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.845 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.845 { 00:17:21.845 "cntlid": 59, 00:17:21.845 "qid": 0, 00:17:21.845 "state": "enabled", 00:17:21.845 "thread": "nvmf_tgt_poll_group_000", 00:17:21.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:21.845 "listen_address": { 00:17:21.845 "trtype": "TCP", 00:17:21.845 "adrfam": "IPv4", 00:17:21.845 "traddr": "10.0.0.2", 00:17:21.845 "trsvcid": "4420" 00:17:21.845 }, 00:17:21.845 "peer_address": { 00:17:21.845 "trtype": "TCP", 00:17:21.845 "adrfam": "IPv4", 00:17:21.846 "traddr": "10.0.0.1", 00:17:21.846 "trsvcid": "52506" 00:17:21.846 }, 00:17:21.846 "auth": { 00:17:21.846 "state": "completed", 00:17:21.846 "digest": "sha384", 00:17:21.846 "dhgroup": "ffdhe2048" 00:17:21.846 } 00:17:21.846 } 00:17:21.846 ]' 00:17:21.846 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.846 13:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.846 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:22.106 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.048 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.309 00:17:23.309 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.309 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.309 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.569 13:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.569 { 00:17:23.569 "cntlid": 61, 00:17:23.569 "qid": 0, 00:17:23.569 "state": "enabled", 00:17:23.569 "thread": "nvmf_tgt_poll_group_000", 00:17:23.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:23.569 "listen_address": { 00:17:23.569 "trtype": "TCP", 00:17:23.569 "adrfam": "IPv4", 00:17:23.569 "traddr": "10.0.0.2", 00:17:23.569 "trsvcid": "4420" 00:17:23.569 }, 00:17:23.569 "peer_address": { 00:17:23.569 "trtype": "TCP", 00:17:23.569 "adrfam": "IPv4", 00:17:23.569 "traddr": "10.0.0.1", 00:17:23.569 "trsvcid": "52528" 00:17:23.569 }, 00:17:23.569 "auth": { 00:17:23.569 "state": "completed", 00:17:23.569 "digest": "sha384", 00:17:23.569 "dhgroup": "ffdhe2048" 00:17:23.569 } 00:17:23.569 } 00:17:23.569 ]' 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.569 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.830 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:23.830 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:24.401 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.401 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:24.401 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.401 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.661 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.662 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.922 00:17:24.922 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.922 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.922 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.183 { 00:17:25.183 "cntlid": 63, 00:17:25.183 "qid": 0, 00:17:25.183 "state": "enabled", 00:17:25.183 "thread": "nvmf_tgt_poll_group_000", 00:17:25.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:25.183 "listen_address": { 00:17:25.183 "trtype": "TCP", 00:17:25.183 "adrfam": 
"IPv4", 00:17:25.183 "traddr": "10.0.0.2", 00:17:25.183 "trsvcid": "4420" 00:17:25.183 }, 00:17:25.183 "peer_address": { 00:17:25.183 "trtype": "TCP", 00:17:25.183 "adrfam": "IPv4", 00:17:25.183 "traddr": "10.0.0.1", 00:17:25.183 "trsvcid": "52572" 00:17:25.183 }, 00:17:25.183 "auth": { 00:17:25.183 "state": "completed", 00:17:25.183 "digest": "sha384", 00:17:25.183 "dhgroup": "ffdhe2048" 00:17:25.183 } 00:17:25.183 } 00:17:25.183 ]' 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.183 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.442 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:25.442 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.010 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.270 
13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.530 00:17:26.530 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.530 13:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.530 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.791 { 00:17:26.791 "cntlid": 65, 00:17:26.791 "qid": 0, 00:17:26.791 "state": "enabled", 00:17:26.791 "thread": "nvmf_tgt_poll_group_000", 00:17:26.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:26.791 "listen_address": { 00:17:26.791 "trtype": "TCP", 00:17:26.791 "adrfam": "IPv4", 00:17:26.791 "traddr": "10.0.0.2", 00:17:26.791 "trsvcid": "4420" 00:17:26.791 }, 00:17:26.791 "peer_address": { 00:17:26.791 "trtype": "TCP", 00:17:26.791 "adrfam": "IPv4", 00:17:26.791 "traddr": "10.0.0.1", 00:17:26.791 "trsvcid": "52590" 00:17:26.791 }, 00:17:26.791 "auth": { 00:17:26.791 "state": "completed", 00:17:26.791 "digest": "sha384", 00:17:26.791 "dhgroup": "ffdhe3072" 00:17:26.791 } 00:17:26.791 } 00:17:26.791 ]' 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.791 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.791 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.791 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.051 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:27.051 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.620 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.880 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:27.880 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.880 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.880 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.880 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.881 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.141 00:17:28.141 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.141 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.141 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.403 13:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.403 { 00:17:28.403 "cntlid": 67, 00:17:28.403 "qid": 0, 00:17:28.403 "state": "enabled", 00:17:28.403 "thread": "nvmf_tgt_poll_group_000", 00:17:28.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:28.403 "listen_address": { 00:17:28.403 "trtype": "TCP", 00:17:28.403 "adrfam": "IPv4", 00:17:28.403 "traddr": "10.0.0.2", 00:17:28.403 "trsvcid": "4420" 00:17:28.403 }, 00:17:28.403 "peer_address": { 00:17:28.403 "trtype": "TCP", 00:17:28.403 "adrfam": "IPv4", 00:17:28.403 "traddr": "10.0.0.1", 00:17:28.403 "trsvcid": "35536" 00:17:28.403 }, 00:17:28.403 "auth": { 00:17:28.403 "state": "completed", 00:17:28.403 "digest": "sha384", 00:17:28.403 "dhgroup": "ffdhe3072" 00:17:28.403 } 00:17:28.403 } 00:17:28.403 ]' 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.403 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.664 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:28.664 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.235 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.497 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.759 00:17:29.759 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.759 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.759 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.020 { 00:17:30.020 "cntlid": 69, 00:17:30.020 "qid": 0, 00:17:30.020 "state": "enabled", 00:17:30.020 "thread": "nvmf_tgt_poll_group_000", 00:17:30.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:30.020 
"listen_address": { 00:17:30.020 "trtype": "TCP", 00:17:30.020 "adrfam": "IPv4", 00:17:30.020 "traddr": "10.0.0.2", 00:17:30.020 "trsvcid": "4420" 00:17:30.020 }, 00:17:30.020 "peer_address": { 00:17:30.020 "trtype": "TCP", 00:17:30.020 "adrfam": "IPv4", 00:17:30.020 "traddr": "10.0.0.1", 00:17:30.020 "trsvcid": "35562" 00:17:30.020 }, 00:17:30.020 "auth": { 00:17:30.020 "state": "completed", 00:17:30.020 "digest": "sha384", 00:17:30.020 "dhgroup": "ffdhe3072" 00:17:30.020 } 00:17:30.020 } 00:17:30.020 ]' 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.020 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.281 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:30.281 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.851 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.852 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.113 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.113 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.113 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.373 00:17:31.373 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.373 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:31.373 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.635 { 00:17:31.635 "cntlid": 71, 00:17:31.635 "qid": 0, 00:17:31.635 "state": "enabled", 00:17:31.635 "thread": "nvmf_tgt_poll_group_000", 00:17:31.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:31.635 "listen_address": { 00:17:31.635 "trtype": "TCP", 00:17:31.635 "adrfam": "IPv4", 00:17:31.635 "traddr": "10.0.0.2", 00:17:31.635 "trsvcid": "4420" 00:17:31.635 }, 00:17:31.635 "peer_address": { 00:17:31.635 "trtype": "TCP", 00:17:31.635 "adrfam": "IPv4", 00:17:31.635 "traddr": "10.0.0.1", 00:17:31.635 "trsvcid": "35574" 00:17:31.635 }, 00:17:31.635 "auth": { 00:17:31.635 "state": "completed", 00:17:31.635 "digest": "sha384", 00:17:31.635 "dhgroup": "ffdhe3072" 00:17:31.635 } 00:17:31.635 } 00:17:31.635 ]' 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.635 13:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.635 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.896 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:31.896 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.469 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.731 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.991 00:17:32.991 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.991 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.991 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.252 13:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.252 { 00:17:33.252 "cntlid": 73, 00:17:33.252 "qid": 0, 00:17:33.252 "state": "enabled", 00:17:33.252 "thread": "nvmf_tgt_poll_group_000", 00:17:33.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:33.252 "listen_address": { 00:17:33.252 "trtype": "TCP", 00:17:33.252 "adrfam": "IPv4", 00:17:33.252 "traddr": "10.0.0.2", 00:17:33.252 "trsvcid": "4420" 00:17:33.252 }, 00:17:33.252 "peer_address": { 00:17:33.252 "trtype": "TCP", 00:17:33.252 "adrfam": "IPv4", 00:17:33.252 "traddr": "10.0.0.1", 00:17:33.252 "trsvcid": "35606" 00:17:33.252 }, 00:17:33.252 "auth": { 00:17:33.252 "state": "completed", 00:17:33.252 "digest": "sha384", 00:17:33.252 "dhgroup": "ffdhe4096" 00:17:33.252 } 00:17:33.252 } 00:17:33.252 ]' 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.252 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.252 13:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.516 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:33.516 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:34.087 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.087 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:34.087 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.348 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.609 00:17:34.609 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.609 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.609 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.870 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.870 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.870 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.870 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.870 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.870 { 00:17:34.870 "cntlid": 75, 00:17:34.870 "qid": 0, 00:17:34.870 "state": "enabled", 00:17:34.870 "thread": "nvmf_tgt_poll_group_000", 00:17:34.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:34.870 
"listen_address": { 00:17:34.870 "trtype": "TCP", 00:17:34.870 "adrfam": "IPv4", 00:17:34.870 "traddr": "10.0.0.2", 00:17:34.870 "trsvcid": "4420" 00:17:34.870 }, 00:17:34.870 "peer_address": { 00:17:34.870 "trtype": "TCP", 00:17:34.870 "adrfam": "IPv4", 00:17:34.870 "traddr": "10.0.0.1", 00:17:34.870 "trsvcid": "35640" 00:17:34.870 }, 00:17:34.870 "auth": { 00:17:34.870 "state": "completed", 00:17:34.870 "digest": "sha384", 00:17:34.870 "dhgroup": "ffdhe4096" 00:17:34.870 } 00:17:34.870 } 00:17:34.870 ]' 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.870 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.131 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:35.131 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.702 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.962 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.223 00:17:36.223 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:36.223 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.223 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.483 { 00:17:36.483 "cntlid": 77, 00:17:36.483 "qid": 0, 00:17:36.483 "state": "enabled", 00:17:36.483 "thread": "nvmf_tgt_poll_group_000", 00:17:36.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:36.483 "listen_address": { 00:17:36.483 "trtype": "TCP", 00:17:36.483 "adrfam": "IPv4", 00:17:36.483 "traddr": "10.0.0.2", 00:17:36.483 "trsvcid": "4420" 00:17:36.483 }, 00:17:36.483 "peer_address": { 00:17:36.483 "trtype": "TCP", 00:17:36.483 "adrfam": "IPv4", 00:17:36.483 "traddr": "10.0.0.1", 00:17:36.483 "trsvcid": "35660" 00:17:36.483 }, 00:17:36.483 "auth": { 00:17:36.483 "state": "completed", 00:17:36.483 "digest": "sha384", 00:17:36.483 "dhgroup": "ffdhe4096" 00:17:36.483 } 00:17:36.483 } 00:17:36.483 ]' 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.483 13:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.483 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.744 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.744 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.744 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.744 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:36.744 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:37.315 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:37.576 13:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.576 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.836 00:17:37.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.095 13:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.095 { 00:17:38.095 "cntlid": 79, 00:17:38.095 "qid": 0, 00:17:38.095 "state": "enabled", 00:17:38.095 "thread": "nvmf_tgt_poll_group_000", 00:17:38.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:38.095 "listen_address": { 00:17:38.095 "trtype": "TCP", 00:17:38.095 "adrfam": "IPv4", 00:17:38.095 "traddr": "10.0.0.2", 00:17:38.095 "trsvcid": "4420" 00:17:38.095 }, 00:17:38.095 "peer_address": { 00:17:38.095 "trtype": "TCP", 00:17:38.095 "adrfam": "IPv4", 00:17:38.095 "traddr": "10.0.0.1", 00:17:38.095 "trsvcid": "38848" 00:17:38.095 }, 00:17:38.095 "auth": { 00:17:38.095 "state": "completed", 00:17:38.095 "digest": "sha384", 00:17:38.095 "dhgroup": "ffdhe4096" 00:17:38.095 } 00:17:38.095 } 00:17:38.095 ]' 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.095 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.355 13:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:38.355 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:39.296 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.296 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:39.296 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.296 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.297 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.557 00:17:39.557 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.557 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.557 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.817 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.817 { 00:17:39.817 "cntlid": 81, 00:17:39.817 "qid": 0, 00:17:39.817 "state": "enabled", 00:17:39.817 "thread": "nvmf_tgt_poll_group_000", 00:17:39.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:39.817 "listen_address": { 
00:17:39.817 "trtype": "TCP", 00:17:39.817 "adrfam": "IPv4", 00:17:39.817 "traddr": "10.0.0.2", 00:17:39.817 "trsvcid": "4420" 00:17:39.817 }, 00:17:39.817 "peer_address": { 00:17:39.817 "trtype": "TCP", 00:17:39.818 "adrfam": "IPv4", 00:17:39.818 "traddr": "10.0.0.1", 00:17:39.818 "trsvcid": "38882" 00:17:39.818 }, 00:17:39.818 "auth": { 00:17:39.818 "state": "completed", 00:17:39.818 "digest": "sha384", 00:17:39.818 "dhgroup": "ffdhe6144" 00:17:39.818 } 00:17:39.818 } 00:17:39.818 ]' 00:17:39.818 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.818 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.818 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.818 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.818 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.078 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.078 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.078 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.078 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:40.078 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.650 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.911 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.171 00:17:41.431 13:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.431 { 00:17:41.431 "cntlid": 83, 00:17:41.431 "qid": 0, 00:17:41.431 "state": "enabled", 00:17:41.431 "thread": "nvmf_tgt_poll_group_000", 00:17:41.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:41.431 "listen_address": { 00:17:41.431 "trtype": "TCP", 00:17:41.431 "adrfam": "IPv4", 00:17:41.431 "traddr": "10.0.0.2", 00:17:41.431 "trsvcid": "4420" 00:17:41.431 }, 00:17:41.431 "peer_address": { 00:17:41.431 "trtype": "TCP", 00:17:41.431 "adrfam": "IPv4", 00:17:41.431 "traddr": "10.0.0.1", 00:17:41.431 "trsvcid": "38920" 00:17:41.431 }, 00:17:41.431 "auth": { 00:17:41.431 "state": "completed", 00:17:41.431 "digest": "sha384", 00:17:41.431 "dhgroup": "ffdhe6144" 00:17:41.431 } 00:17:41.431 } 00:17:41.431 ]' 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.431 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.692 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.692 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.692 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.692 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.692 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.952 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:41.952 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.523 13:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.523 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.784 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.044 00:17:43.044 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.044 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.044 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.303 { 00:17:43.303 "cntlid": 85, 00:17:43.303 "qid": 0, 00:17:43.303 "state": "enabled", 00:17:43.303 "thread": "nvmf_tgt_poll_group_000", 00:17:43.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:43.303 "listen_address": { 00:17:43.303 "trtype": "TCP", 00:17:43.303 "adrfam": "IPv4", 00:17:43.303 "traddr": "10.0.0.2", 00:17:43.303 "trsvcid": "4420" 00:17:43.303 }, 00:17:43.303 "peer_address": { 00:17:43.303 "trtype": "TCP", 00:17:43.303 "adrfam": "IPv4", 00:17:43.303 "traddr": "10.0.0.1", 00:17:43.303 "trsvcid": "38954" 00:17:43.303 }, 00:17:43.303 "auth": { 00:17:43.303 "state": "completed", 00:17:43.303 "digest": "sha384", 00:17:43.303 "dhgroup": "ffdhe6144" 00:17:43.303 } 00:17:43.303 } 00:17:43.303 ]' 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.303 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.563 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:43.563 13:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.133 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.393 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.652 00:17:44.652 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.652 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.652 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.912 { 00:17:44.912 "cntlid": 87, 00:17:44.912 "qid": 0, 00:17:44.912 "state": "enabled", 00:17:44.912 "thread": "nvmf_tgt_poll_group_000", 00:17:44.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:44.912 "listen_address": { 00:17:44.912 "trtype": 
"TCP", 00:17:44.912 "adrfam": "IPv4", 00:17:44.912 "traddr": "10.0.0.2", 00:17:44.912 "trsvcid": "4420" 00:17:44.912 }, 00:17:44.912 "peer_address": { 00:17:44.912 "trtype": "TCP", 00:17:44.912 "adrfam": "IPv4", 00:17:44.912 "traddr": "10.0.0.1", 00:17:44.912 "trsvcid": "38982" 00:17:44.912 }, 00:17:44.912 "auth": { 00:17:44.912 "state": "completed", 00:17:44.912 "digest": "sha384", 00:17:44.912 "dhgroup": "ffdhe6144" 00:17:44.912 } 00:17:44.912 } 00:17:44.912 ]' 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.912 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:45.172 13:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.111 13:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.111 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.681 00:17:46.681 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.681 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.681 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.944 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.944 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.944 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.944 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.944 { 00:17:46.944 "cntlid": 89, 00:17:46.944 "qid": 0, 00:17:46.944 "state": "enabled", 00:17:46.944 "thread": "nvmf_tgt_poll_group_000", 00:17:46.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:46.944 "listen_address": { 00:17:46.944 "trtype": "TCP", 00:17:46.944 "adrfam": "IPv4", 00:17:46.944 "traddr": "10.0.0.2", 00:17:46.944 "trsvcid": "4420" 00:17:46.944 }, 00:17:46.944 "peer_address": { 00:17:46.944 "trtype": "TCP", 00:17:46.944 "adrfam": "IPv4", 00:17:46.944 "traddr": "10.0.0.1", 00:17:46.944 "trsvcid": "39018" 00:17:46.944 }, 00:17:46.944 "auth": { 00:17:46.944 "state": "completed", 00:17:46.944 "digest": "sha384", 00:17:46.944 "dhgroup": "ffdhe8192" 00:17:46.944 } 00:17:46.944 } 00:17:46.944 ]' 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.944 13:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.944 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.204 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:47.205 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:47.773 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.773 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.032 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.602 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.603 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.863 { 00:17:48.863 "cntlid": 91, 00:17:48.863 "qid": 0, 00:17:48.863 "state": "enabled", 00:17:48.863 "thread": "nvmf_tgt_poll_group_000", 00:17:48.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:48.863 "listen_address": { 00:17:48.863 "trtype": "TCP", 00:17:48.863 "adrfam": "IPv4", 00:17:48.863 "traddr": "10.0.0.2", 00:17:48.863 "trsvcid": "4420" 00:17:48.863 }, 00:17:48.863 "peer_address": { 00:17:48.863 "trtype": "TCP", 00:17:48.863 "adrfam": "IPv4", 00:17:48.863 "traddr": "10.0.0.1", 00:17:48.863 "trsvcid": "57364" 00:17:48.863 }, 00:17:48.863 "auth": { 00:17:48.863 "state": "completed", 00:17:48.863 "digest": "sha384", 00:17:48.863 "dhgroup": "ffdhe8192" 00:17:48.863 } 00:17:48.863 } 00:17:48.863 ]' 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.863 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.863 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:48.863 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.863 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.123 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:49.123 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.694 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.954 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.955 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.955 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.214 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.474 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.474 { 00:17:50.474 "cntlid": 93, 00:17:50.474 "qid": 0, 00:17:50.474 "state": "enabled", 00:17:50.474 "thread": "nvmf_tgt_poll_group_000", 00:17:50.474 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:50.474 "listen_address": { 00:17:50.474 "trtype": "TCP", 00:17:50.474 "adrfam": "IPv4", 00:17:50.474 "traddr": "10.0.0.2", 00:17:50.474 "trsvcid": "4420" 00:17:50.474 }, 00:17:50.474 "peer_address": { 00:17:50.474 "trtype": "TCP", 00:17:50.474 "adrfam": "IPv4", 00:17:50.475 "traddr": "10.0.0.1", 00:17:50.475 "trsvcid": "57388" 00:17:50.475 }, 00:17:50.475 "auth": { 00:17:50.475 "state": "completed", 00:17:50.475 "digest": "sha384", 00:17:50.475 "dhgroup": "ffdhe8192" 00:17:50.475 } 00:17:50.475 } 00:17:50.475 ]' 00:17:50.475 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.475 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.735 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.735 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:50.735 13:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.678 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.249 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.249 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.510 { 00:17:52.510 "cntlid": 95, 00:17:52.510 "qid": 0, 00:17:52.510 "state": "enabled", 00:17:52.510 "thread": "nvmf_tgt_poll_group_000", 00:17:52.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:52.510 "listen_address": { 00:17:52.510 "trtype": "TCP", 00:17:52.510 "adrfam": "IPv4", 00:17:52.510 "traddr": "10.0.0.2", 00:17:52.510 "trsvcid": "4420" 00:17:52.510 }, 00:17:52.510 "peer_address": { 00:17:52.510 "trtype": "TCP", 00:17:52.510 "adrfam": "IPv4", 00:17:52.510 "traddr": "10.0.0.1", 00:17:52.510 "trsvcid": "57404" 00:17:52.510 }, 00:17:52.510 "auth": { 00:17:52.510 "state": "completed", 00:17:52.510 "digest": "sha384", 00:17:52.510 "dhgroup": "ffdhe8192" 00:17:52.510 } 00:17:52.510 } 00:17:52.510 ]' 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.510 13:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.510 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.771 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:52.771 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.342 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.602 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.863 00:17:53.863 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.863 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.863 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.863 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.863 13:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.863 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.123 { 00:17:54.123 "cntlid": 97, 00:17:54.123 "qid": 0, 00:17:54.123 "state": "enabled", 00:17:54.123 "thread": "nvmf_tgt_poll_group_000", 00:17:54.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:54.123 "listen_address": { 00:17:54.123 "trtype": "TCP", 00:17:54.123 "adrfam": "IPv4", 00:17:54.123 "traddr": "10.0.0.2", 00:17:54.123 "trsvcid": "4420" 00:17:54.123 }, 00:17:54.123 "peer_address": { 00:17:54.123 "trtype": "TCP", 00:17:54.123 "adrfam": "IPv4", 00:17:54.123 "traddr": "10.0.0.1", 00:17:54.123 "trsvcid": "57434" 00:17:54.123 }, 00:17:54.123 "auth": { 00:17:54.123 "state": "completed", 00:17:54.123 "digest": "sha512", 00:17:54.123 "dhgroup": "null" 00:17:54.123 } 00:17:54.123 } 00:17:54.123 ]' 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.123 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.383 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:54.383 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.956 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.217 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.478 00:17:55.478 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.478 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.478 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.738 { 00:17:55.738 "cntlid": 99, 
00:17:55.738 "qid": 0, 00:17:55.738 "state": "enabled", 00:17:55.738 "thread": "nvmf_tgt_poll_group_000", 00:17:55.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:55.738 "listen_address": { 00:17:55.738 "trtype": "TCP", 00:17:55.738 "adrfam": "IPv4", 00:17:55.738 "traddr": "10.0.0.2", 00:17:55.738 "trsvcid": "4420" 00:17:55.738 }, 00:17:55.738 "peer_address": { 00:17:55.738 "trtype": "TCP", 00:17:55.738 "adrfam": "IPv4", 00:17:55.738 "traddr": "10.0.0.1", 00:17:55.738 "trsvcid": "57458" 00:17:55.738 }, 00:17:55.738 "auth": { 00:17:55.738 "state": "completed", 00:17:55.738 "digest": "sha512", 00:17:55.738 "dhgroup": "null" 00:17:55.738 } 00:17:55.738 } 00:17:55.738 ]' 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.738 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.739 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.739 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.739 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.999 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret 
DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:55.999 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.569 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
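The nested `for digest` / `for dhgroup` / `for keyid` loops visible in the trace walk a full matrix: for each combination the host is reconfigured via `bdev_nvme_set_options --dhchap-digests … --dhchap-dhgroups …` and `connect_authenticate` is run for one key. A runnable sketch of that control flow, with the rpc calls replaced by `echo` and the digest/dhgroup lists assumed from the values seen in this log (sha384/sha512, null/ffdhe2048/ffdhe8192) plus the remaining FFDHE groups NVMe/TCP DH-HMAC-CHAP defines:

```shell
#!/usr/bin/env bash
# Assumed test matrix; the real lists live in target/auth.sh, not here.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)

count=0
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # stand-in for: hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
      #                       --dhchap-dhgroups "$dhgroup"
      #               connect_authenticate "$digest" "$dhgroup" "$keyid"
      echo "connect_authenticate $digest $dhgroup $keyid"
      count=$((count + 1))
    done
  done
done
echo "total combinations: $count"
```

With these assumed lists the matrix is 3 × 6 × 4 = 72 connect/verify/disconnect cycles, which matches the highly repetitive shape of this section of the log: each cycle emits one `nvmf_subsystem_add_host`, one `bdev_nvme_attach_controller`, one qpairs JSON dump with `auth.digest`/`auth.dhgroup`/`auth.state` checks, and one detach/remove pair.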
00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.828 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.088 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.088 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.088 { 00:17:57.088 "cntlid": 101, 00:17:57.088 "qid": 0, 00:17:57.088 "state": "enabled", 00:17:57.088 "thread": "nvmf_tgt_poll_group_000", 00:17:57.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:57.088 "listen_address": { 00:17:57.088 "trtype": "TCP", 00:17:57.088 "adrfam": "IPv4", 00:17:57.088 "traddr": "10.0.0.2", 00:17:57.088 "trsvcid": "4420" 00:17:57.088 }, 00:17:57.088 "peer_address": { 00:17:57.088 "trtype": "TCP", 00:17:57.088 "adrfam": "IPv4", 00:17:57.088 "traddr": "10.0.0.1", 00:17:57.088 "trsvcid": "41912" 00:17:57.088 }, 00:17:57.088 "auth": { 00:17:57.088 "state": "completed", 00:17:57.088 "digest": "sha512", 00:17:57.088 "dhgroup": "null" 00:17:57.088 } 00:17:57.088 } 
00:17:57.088 ]' 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.348 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.608 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:57.608 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.232 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.232 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.543 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.543 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.830 { 00:17:58.830 "cntlid": 103, 00:17:58.830 "qid": 0, 00:17:58.830 "state": "enabled", 00:17:58.830 "thread": "nvmf_tgt_poll_group_000", 00:17:58.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:58.830 "listen_address": { 00:17:58.830 "trtype": "TCP", 00:17:58.830 "adrfam": "IPv4", 00:17:58.830 "traddr": "10.0.0.2", 00:17:58.830 "trsvcid": "4420" 00:17:58.830 }, 00:17:58.830 "peer_address": { 00:17:58.830 "trtype": "TCP", 00:17:58.830 "adrfam": "IPv4", 00:17:58.830 "traddr": "10.0.0.1", 00:17:58.830 "trsvcid": "41946" 00:17:58.830 }, 00:17:58.830 "auth": { 00:17:58.830 "state": "completed", 00:17:58.830 "digest": "sha512", 00:17:58.830 "dhgroup": "null" 00:17:58.830 } 00:17:58.830 } 00:17:58.830 ]' 00:17:58.830 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.830 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.830 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.830 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.830 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.090 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.090 13:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.091 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.091 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:59.091 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:17:59.662 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.923 13:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.923 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.923 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.183 00:18:00.183 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.183 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.183 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.444 { 00:18:00.444 "cntlid": 105, 00:18:00.444 "qid": 0, 00:18:00.444 "state": "enabled", 00:18:00.444 "thread": "nvmf_tgt_poll_group_000", 00:18:00.444 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:00.444 "listen_address": { 00:18:00.444 "trtype": "TCP", 00:18:00.444 "adrfam": "IPv4", 00:18:00.444 "traddr": "10.0.0.2", 00:18:00.444 "trsvcid": "4420" 00:18:00.444 }, 00:18:00.444 "peer_address": { 00:18:00.444 "trtype": "TCP", 00:18:00.444 "adrfam": "IPv4", 00:18:00.444 "traddr": "10.0.0.1", 00:18:00.444 "trsvcid": "41986" 00:18:00.444 }, 00:18:00.444 "auth": { 00:18:00.444 "state": "completed", 00:18:00.444 "digest": "sha512", 00:18:00.444 "dhgroup": "ffdhe2048" 00:18:00.444 } 00:18:00.444 } 00:18:00.444 ]' 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.444 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.706 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.706 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.706 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.706 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret 
DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:00.706 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.648 13:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.648 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.909 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.909 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.170 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.170 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.170 { 00:18:02.170 "cntlid": 107, 00:18:02.170 "qid": 0, 00:18:02.170 "state": "enabled", 00:18:02.170 "thread": "nvmf_tgt_poll_group_000", 00:18:02.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:02.170 "listen_address": { 00:18:02.170 "trtype": "TCP", 00:18:02.170 "adrfam": "IPv4", 00:18:02.170 "traddr": "10.0.0.2", 00:18:02.170 "trsvcid": "4420" 00:18:02.170 }, 00:18:02.170 "peer_address": { 00:18:02.170 "trtype": "TCP", 00:18:02.170 "adrfam": "IPv4", 00:18:02.170 "traddr": "10.0.0.1", 00:18:02.170 "trsvcid": "42024" 00:18:02.170 }, 00:18:02.170 "auth": { 00:18:02.170 "state": 
"completed", 00:18:02.171 "digest": "sha512", 00:18:02.171 "dhgroup": "ffdhe2048" 00:18:02.171 } 00:18:02.171 } 00:18:02.171 ]' 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.171 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.431 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:02.431 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:03.003 13:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.003 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.263 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.522 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.522 
13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.522 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.781 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.781 { 00:18:03.781 "cntlid": 109, 00:18:03.781 "qid": 0, 00:18:03.781 "state": "enabled", 00:18:03.781 "thread": "nvmf_tgt_poll_group_000", 00:18:03.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:03.781 "listen_address": { 00:18:03.781 "trtype": "TCP", 00:18:03.781 "adrfam": "IPv4", 00:18:03.781 "traddr": "10.0.0.2", 00:18:03.781 "trsvcid": "4420" 00:18:03.781 }, 00:18:03.781 "peer_address": { 00:18:03.781 "trtype": "TCP", 00:18:03.781 "adrfam": "IPv4", 00:18:03.781 "traddr": "10.0.0.1", 00:18:03.781 "trsvcid": "42052" 00:18:03.781 }, 00:18:03.781 "auth": { 00:18:03.782 "state": "completed", 00:18:03.782 "digest": "sha512", 00:18:03.782 "dhgroup": "ffdhe2048" 00:18:03.782 } 00:18:03.782 } 00:18:03.782 ]' 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.782 13:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.782 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.042 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:04.042 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.612 
13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.612 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:04.872 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.873 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.873 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.873 13:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.873 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.873 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.133 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.133 { 00:18:05.133 "cntlid": 111, 
00:18:05.133 "qid": 0, 00:18:05.133 "state": "enabled", 00:18:05.133 "thread": "nvmf_tgt_poll_group_000", 00:18:05.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:05.133 "listen_address": { 00:18:05.133 "trtype": "TCP", 00:18:05.133 "adrfam": "IPv4", 00:18:05.133 "traddr": "10.0.0.2", 00:18:05.133 "trsvcid": "4420" 00:18:05.133 }, 00:18:05.133 "peer_address": { 00:18:05.133 "trtype": "TCP", 00:18:05.133 "adrfam": "IPv4", 00:18:05.133 "traddr": "10.0.0.1", 00:18:05.133 "trsvcid": "42080" 00:18:05.133 }, 00:18:05.133 "auth": { 00:18:05.133 "state": "completed", 00:18:05.133 "digest": "sha512", 00:18:05.133 "dhgroup": "ffdhe2048" 00:18:05.133 } 00:18:05.133 } 00:18:05.133 ]' 00:18:05.133 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.394 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.653 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:05.653 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.224 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.225 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.225 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.225 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.487 13:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.487 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.748 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.748 { 00:18:06.748 "cntlid": 113, 00:18:06.748 "qid": 0, 00:18:06.748 "state": "enabled", 00:18:06.748 "thread": "nvmf_tgt_poll_group_000", 00:18:06.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:06.748 "listen_address": { 00:18:06.748 "trtype": "TCP", 00:18:06.748 "adrfam": "IPv4", 00:18:06.748 "traddr": "10.0.0.2", 00:18:06.748 "trsvcid": "4420" 00:18:06.748 }, 00:18:06.748 "peer_address": { 00:18:06.748 "trtype": "TCP", 00:18:06.748 "adrfam": "IPv4", 00:18:06.748 "traddr": "10.0.0.1", 00:18:06.748 "trsvcid": "42116" 00:18:06.748 }, 00:18:06.748 "auth": { 00:18:06.748 "state": 
"completed", 00:18:06.748 "digest": "sha512", 00:18:06.748 "dhgroup": "ffdhe3072" 00:18:06.748 } 00:18:06.748 } 00:18:06.748 ]' 00:18:06.748 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.748 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.748 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.009 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.009 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.009 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.009 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.009 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.270 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:07.270 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret 
DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.845 13:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.105 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.366 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.366 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.366 { 00:18:08.366 "cntlid": 115, 00:18:08.366 "qid": 0, 00:18:08.366 "state": "enabled", 00:18:08.366 "thread": "nvmf_tgt_poll_group_000", 00:18:08.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:08.366 "listen_address": { 00:18:08.366 "trtype": "TCP", 00:18:08.366 "adrfam": "IPv4", 00:18:08.366 "traddr": "10.0.0.2", 00:18:08.366 "trsvcid": "4420" 00:18:08.366 }, 00:18:08.366 "peer_address": { 00:18:08.366 "trtype": "TCP", 00:18:08.366 "adrfam": "IPv4", 00:18:08.367 "traddr": "10.0.0.1", 00:18:08.367 "trsvcid": "36360" 00:18:08.367 }, 00:18:08.367 "auth": { 00:18:08.367 "state": "completed", 00:18:08.367 "digest": "sha512", 00:18:08.367 "dhgroup": "ffdhe3072" 00:18:08.367 } 00:18:08.367 } 00:18:08.367 ]' 00:18:08.367 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.628 13:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.628 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.889 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:08.889 13:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.459 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.720 13:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.981 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.981 13:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.981 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.981 { 00:18:09.981 "cntlid": 117, 00:18:09.982 "qid": 0, 00:18:09.982 "state": "enabled", 00:18:09.982 "thread": "nvmf_tgt_poll_group_000", 00:18:09.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:09.982 "listen_address": { 00:18:09.982 "trtype": "TCP", 00:18:09.982 "adrfam": "IPv4", 00:18:09.982 "traddr": "10.0.0.2", 00:18:09.982 "trsvcid": "4420" 00:18:09.982 }, 00:18:09.982 "peer_address": { 00:18:09.982 "trtype": "TCP", 00:18:09.982 "adrfam": "IPv4", 00:18:09.982 "traddr": "10.0.0.1", 00:18:09.982 "trsvcid": "36388" 00:18:09.982 }, 00:18:09.982 "auth": { 00:18:09.982 "state": "completed", 00:18:09.982 "digest": "sha512", 00:18:09.982 "dhgroup": "ffdhe3072" 00:18:09.982 } 00:18:09.982 } 00:18:09.982 ]' 00:18:09.982 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.243 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.503 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:10.503 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.074 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.335 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.335 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.596 { 00:18:11.596 "cntlid": 119, 00:18:11.596 "qid": 0, 00:18:11.596 "state": "enabled", 00:18:11.596 "thread": "nvmf_tgt_poll_group_000", 00:18:11.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:11.596 "listen_address": { 00:18:11.596 "trtype": "TCP", 00:18:11.596 "adrfam": "IPv4", 00:18:11.596 "traddr": "10.0.0.2", 00:18:11.596 "trsvcid": "4420" 00:18:11.596 }, 00:18:11.596 "peer_address": { 00:18:11.596 "trtype": "TCP", 00:18:11.596 "adrfam": "IPv4", 00:18:11.596 "traddr": "10.0.0.1", 
00:18:11.596 "trsvcid": "36412" 00:18:11.596 }, 00:18:11.596 "auth": { 00:18:11.596 "state": "completed", 00:18:11.596 "digest": "sha512", 00:18:11.596 "dhgroup": "ffdhe3072" 00:18:11.596 } 00:18:11.596 } 00:18:11.596 ]' 00:18:11.596 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.857 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.118 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:12.118 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.687 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.947 13:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.947 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.947 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.947 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.947 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.947 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.207 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.207 { 00:18:13.207 "cntlid": 121, 00:18:13.207 "qid": 0, 00:18:13.207 "state": "enabled", 00:18:13.207 "thread": "nvmf_tgt_poll_group_000", 00:18:13.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:13.207 "listen_address": { 00:18:13.207 "trtype": "TCP", 00:18:13.207 "adrfam": "IPv4", 00:18:13.207 "traddr": "10.0.0.2", 00:18:13.207 "trsvcid": "4420" 00:18:13.207 }, 00:18:13.207 "peer_address": { 00:18:13.207 "trtype": "TCP", 00:18:13.207 "adrfam": "IPv4", 00:18:13.207 "traddr": "10.0.0.1", 00:18:13.207 "trsvcid": "36450" 00:18:13.207 }, 00:18:13.207 "auth": { 00:18:13.207 "state": "completed", 00:18:13.207 "digest": "sha512", 00:18:13.207 "dhgroup": "ffdhe4096" 00:18:13.207 } 00:18:13.207 } 00:18:13.207 ]' 00:18:13.207 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.468 13:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.468 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.728 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:13.728 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:14.298 14:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.298 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.559 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.560 14:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.560 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.820 00:18:14.820 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.820 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.820 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.082 { 00:18:15.082 "cntlid": 123, 00:18:15.082 "qid": 0, 00:18:15.082 "state": "enabled", 00:18:15.082 "thread": "nvmf_tgt_poll_group_000", 00:18:15.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:15.082 "listen_address": { 00:18:15.082 "trtype": "TCP", 00:18:15.082 "adrfam": "IPv4", 00:18:15.082 "traddr": "10.0.0.2", 00:18:15.082 "trsvcid": "4420" 00:18:15.082 }, 00:18:15.082 "peer_address": { 00:18:15.082 "trtype": "TCP", 00:18:15.082 "adrfam": "IPv4", 00:18:15.082 "traddr": "10.0.0.1", 00:18:15.082 "trsvcid": "36490" 00:18:15.082 }, 00:18:15.082 "auth": { 00:18:15.082 "state": "completed", 00:18:15.082 "digest": "sha512", 00:18:15.082 "dhgroup": "ffdhe4096" 00:18:15.082 } 00:18:15.082 } 00:18:15.082 ]' 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.082 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.343 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:15.343 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.915 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.915 14:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.177 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.437 00:18:16.437 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.437 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.437 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.698 { 00:18:16.698 "cntlid": 125, 00:18:16.698 "qid": 0, 00:18:16.698 "state": "enabled", 00:18:16.698 "thread": "nvmf_tgt_poll_group_000", 00:18:16.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:16.698 "listen_address": { 00:18:16.698 "trtype": "TCP", 00:18:16.698 "adrfam": "IPv4", 00:18:16.698 "traddr": "10.0.0.2", 00:18:16.698 
"trsvcid": "4420" 00:18:16.698 }, 00:18:16.698 "peer_address": { 00:18:16.698 "trtype": "TCP", 00:18:16.698 "adrfam": "IPv4", 00:18:16.698 "traddr": "10.0.0.1", 00:18:16.698 "trsvcid": "36510" 00:18:16.698 }, 00:18:16.698 "auth": { 00:18:16.698 "state": "completed", 00:18:16.698 "digest": "sha512", 00:18:16.698 "dhgroup": "ffdhe4096" 00:18:16.698 } 00:18:16.698 } 00:18:16.698 ]' 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.698 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.699 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.958 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:16.958 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.528 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.788 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.789 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.049 00:18:18.049 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.049 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.049 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.309 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.309 { 00:18:18.309 "cntlid": 127, 00:18:18.309 "qid": 0, 00:18:18.309 "state": "enabled", 00:18:18.309 "thread": "nvmf_tgt_poll_group_000", 00:18:18.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:18.309 "listen_address": { 00:18:18.309 "trtype": "TCP", 00:18:18.309 "adrfam": "IPv4", 00:18:18.309 "traddr": "10.0.0.2", 00:18:18.309 "trsvcid": "4420" 00:18:18.309 }, 00:18:18.309 "peer_address": { 00:18:18.309 "trtype": "TCP", 00:18:18.309 "adrfam": "IPv4", 00:18:18.309 "traddr": "10.0.0.1", 00:18:18.309 "trsvcid": "42800" 00:18:18.309 }, 00:18:18.309 "auth": { 00:18:18.309 "state": "completed", 00:18:18.309 "digest": "sha512", 00:18:18.309 "dhgroup": "ffdhe4096" 00:18:18.309 } 00:18:18.309 } 00:18:18.310 ]' 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.310 14:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.310 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.570 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:18.571 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.142 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.403 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.664 00:18:19.664 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.664 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.664 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 14:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.924 { 00:18:19.924 "cntlid": 129, 00:18:19.924 "qid": 0, 00:18:19.924 "state": "enabled", 00:18:19.924 "thread": "nvmf_tgt_poll_group_000", 00:18:19.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:19.924 "listen_address": { 00:18:19.924 "trtype": "TCP", 00:18:19.924 "adrfam": "IPv4", 00:18:19.924 "traddr": "10.0.0.2", 00:18:19.924 "trsvcid": "4420" 00:18:19.924 }, 00:18:19.924 "peer_address": { 00:18:19.924 "trtype": "TCP", 00:18:19.924 "adrfam": "IPv4", 00:18:19.924 "traddr": "10.0.0.1", 00:18:19.924 "trsvcid": "42840" 00:18:19.924 }, 00:18:19.924 "auth": { 00:18:19.924 "state": "completed", 00:18:19.924 "digest": "sha512", 00:18:19.924 "dhgroup": "ffdhe6144" 00:18:19.924 } 00:18:19.924 } 00:18:19.924 ]' 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.924 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.186 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.186 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.186 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.186 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:20.186 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:20.757 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.017 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.017 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.017 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.017 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.017 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.018 14:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.018 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.589 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.589 { 00:18:21.589 "cntlid": 131, 00:18:21.589 "qid": 0, 00:18:21.589 "state": "enabled", 00:18:21.589 "thread": "nvmf_tgt_poll_group_000", 00:18:21.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:21.589 "listen_address": { 00:18:21.589 "trtype": "TCP", 00:18:21.589 "adrfam": "IPv4", 00:18:21.589 "traddr": "10.0.0.2", 00:18:21.589 
"trsvcid": "4420" 00:18:21.589 }, 00:18:21.589 "peer_address": { 00:18:21.589 "trtype": "TCP", 00:18:21.589 "adrfam": "IPv4", 00:18:21.589 "traddr": "10.0.0.1", 00:18:21.589 "trsvcid": "42876" 00:18:21.589 }, 00:18:21.589 "auth": { 00:18:21.589 "state": "completed", 00:18:21.589 "digest": "sha512", 00:18:21.589 "dhgroup": "ffdhe6144" 00:18:21.589 } 00:18:21.589 } 00:18:21.589 ]' 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.589 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.850 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.850 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.850 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.850 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.850 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.850 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:21.850 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.790 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.050 00:18:23.050 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.050 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:23.050 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.311 { 00:18:23.311 "cntlid": 133, 00:18:23.311 "qid": 0, 00:18:23.311 "state": "enabled", 00:18:23.311 "thread": "nvmf_tgt_poll_group_000", 00:18:23.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:23.311 "listen_address": { 00:18:23.311 "trtype": "TCP", 00:18:23.311 "adrfam": "IPv4", 00:18:23.311 "traddr": "10.0.0.2", 00:18:23.311 "trsvcid": "4420" 00:18:23.311 }, 00:18:23.311 "peer_address": { 00:18:23.311 "trtype": "TCP", 00:18:23.311 "adrfam": "IPv4", 00:18:23.311 "traddr": "10.0.0.1", 00:18:23.311 "trsvcid": "42906" 00:18:23.311 }, 00:18:23.311 "auth": { 00:18:23.311 "state": "completed", 00:18:23.311 "digest": "sha512", 00:18:23.311 "dhgroup": "ffdhe6144" 00:18:23.311 } 00:18:23.311 } 00:18:23.311 ]' 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.311 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.311 14:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:23.571 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:24.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:24.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.403 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.663 00:18:24.923 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.923 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.924 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.924 { 00:18:24.924 "cntlid": 135, 00:18:24.924 "qid": 0, 00:18:24.924 "state": "enabled", 00:18:24.924 "thread": "nvmf_tgt_poll_group_000", 00:18:24.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:24.924 "listen_address": { 00:18:24.924 "trtype": "TCP", 00:18:24.924 "adrfam": "IPv4", 00:18:24.924 "traddr": "10.0.0.2", 00:18:24.924 "trsvcid": "4420" 00:18:24.924 }, 00:18:24.924 "peer_address": { 00:18:24.924 "trtype": "TCP", 00:18:24.924 "adrfam": "IPv4", 00:18:24.924 "traddr": "10.0.0.1", 00:18:24.924 "trsvcid": "42928" 00:18:24.924 }, 00:18:24.924 "auth": { 00:18:24.924 "state": "completed", 00:18:24.924 "digest": "sha512", 00:18:24.924 "dhgroup": "ffdhe6144" 00:18:24.924 } 00:18:24.924 } 00:18:24.924 ]' 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.924 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:25.184 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.127 14:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.127 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.699 00:18:26.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.959 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.959 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.959 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.959 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.959 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.960 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.960 { 00:18:26.960 "cntlid": 137, 00:18:26.960 "qid": 0, 00:18:26.960 "state": "enabled", 00:18:26.960 "thread": "nvmf_tgt_poll_group_000", 00:18:26.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:26.960 "listen_address": { 00:18:26.960 "trtype": "TCP", 00:18:26.960 "adrfam": "IPv4", 00:18:26.960 "traddr": "10.0.0.2", 00:18:26.960 
"trsvcid": "4420" 00:18:26.960 }, 00:18:26.960 "peer_address": { 00:18:26.960 "trtype": "TCP", 00:18:26.960 "adrfam": "IPv4", 00:18:26.960 "traddr": "10.0.0.1", 00:18:26.960 "trsvcid": "42956" 00:18:26.960 }, 00:18:26.960 "auth": { 00:18:26.960 "state": "completed", 00:18:26.960 "digest": "sha512", 00:18:26.960 "dhgroup": "ffdhe8192" 00:18:26.960 } 00:18:26.960 } 00:18:26.960 ]' 00:18:26.960 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.960 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.220 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:27.220 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.791 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.051 14:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.051 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.623 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.623 { 00:18:28.623 "cntlid": 139, 00:18:28.623 "qid": 0, 00:18:28.623 "state": "enabled", 00:18:28.623 "thread": "nvmf_tgt_poll_group_000", 00:18:28.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:28.623 "listen_address": { 00:18:28.623 "trtype": "TCP", 00:18:28.623 "adrfam": "IPv4", 00:18:28.623 "traddr": "10.0.0.2", 00:18:28.623 "trsvcid": "4420" 00:18:28.623 }, 00:18:28.623 "peer_address": { 00:18:28.623 "trtype": "TCP", 00:18:28.623 "adrfam": "IPv4", 00:18:28.623 "traddr": "10.0.0.1", 00:18:28.623 "trsvcid": "38706" 00:18:28.623 }, 00:18:28.623 "auth": { 00:18:28.623 "state": "completed", 00:18:28.623 "digest": "sha512", 00:18:28.623 "dhgroup": "ffdhe8192" 00:18:28.623 } 00:18:28.623 } 00:18:28.623 ]' 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.623 14:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.623 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.884 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.884 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.884 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.884 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.884 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.145 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:29.145 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: --dhchap-ctrl-secret DHHC-1:02:ODhiNDg1N2FlYmZiZjE4MWZhNzZmYmMyNjQzMDNjMTg0NjMwNzhmMmQwZTJhOGFk0bSaqA==: 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.716 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.977 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.238 00:18:30.238 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.238 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.238 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.499 14:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.499 { 00:18:30.499 "cntlid": 141, 00:18:30.499 "qid": 0, 00:18:30.499 "state": "enabled", 00:18:30.499 "thread": "nvmf_tgt_poll_group_000", 00:18:30.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:30.499 "listen_address": { 00:18:30.499 "trtype": "TCP", 00:18:30.499 "adrfam": "IPv4", 00:18:30.499 "traddr": "10.0.0.2", 00:18:30.499 "trsvcid": "4420" 00:18:30.499 }, 00:18:30.499 "peer_address": { 00:18:30.499 "trtype": "TCP", 00:18:30.499 "adrfam": "IPv4", 00:18:30.499 "traddr": "10.0.0.1", 00:18:30.499 "trsvcid": "38728" 00:18:30.499 }, 00:18:30.499 "auth": { 00:18:30.499 "state": "completed", 00:18:30.499 "digest": "sha512", 00:18:30.499 "dhgroup": "ffdhe8192" 00:18:30.499 } 00:18:30.499 } 00:18:30.499 ]' 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.499 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.758 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.758 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.758 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.758 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:30.758 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:01:YTZlZTZmNGI2MjhmMzg0YWMwMmIwOGRkMjJiMTAwYjH9+xy1: 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.698 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.271 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.271 { 00:18:32.271 "cntlid": 143, 00:18:32.271 "qid": 0, 00:18:32.271 "state": "enabled", 00:18:32.271 "thread": "nvmf_tgt_poll_group_000", 00:18:32.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:32.271 "listen_address": { 00:18:32.271 "trtype": "TCP", 00:18:32.271 "adrfam": 
"IPv4", 00:18:32.271 "traddr": "10.0.0.2", 00:18:32.271 "trsvcid": "4420" 00:18:32.271 }, 00:18:32.271 "peer_address": { 00:18:32.271 "trtype": "TCP", 00:18:32.271 "adrfam": "IPv4", 00:18:32.271 "traddr": "10.0.0.1", 00:18:32.271 "trsvcid": "38744" 00:18:32.271 }, 00:18:32.271 "auth": { 00:18:32.271 "state": "completed", 00:18:32.271 "digest": "sha512", 00:18:32.271 "dhgroup": "ffdhe8192" 00:18:32.271 } 00:18:32.271 } 00:18:32.271 ]' 00:18:32.271 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.532 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.793 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:32.793 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.364 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.624 14:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.624 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.625 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.196 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.196 { 00:18:34.196 "cntlid": 145, 00:18:34.196 "qid": 0, 00:18:34.196 "state": "enabled", 00:18:34.196 "thread": "nvmf_tgt_poll_group_000", 00:18:34.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:34.196 "listen_address": { 00:18:34.196 "trtype": "TCP", 00:18:34.196 "adrfam": "IPv4", 00:18:34.196 "traddr": "10.0.0.2", 00:18:34.196 "trsvcid": "4420" 00:18:34.196 }, 00:18:34.196 "peer_address": { 00:18:34.196 "trtype": "TCP", 00:18:34.196 "adrfam": "IPv4", 00:18:34.196 "traddr": "10.0.0.1", 00:18:34.196 "trsvcid": "38762" 00:18:34.196 }, 00:18:34.196 "auth": { 00:18:34.196 "state": 
"completed", 00:18:34.196 "digest": "sha512", 00:18:34.196 "dhgroup": "ffdhe8192" 00:18:34.196 } 00:18:34.196 } 00:18:34.196 ]' 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.196 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.457 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.457 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.457 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.457 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:34.457 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:M2MzYWFkM2ZkODRiMjAwNWFiMmYwYTc3Yjc2NTg3MDI0Y2U1MzlmNjUxZjY0NTBh7lpvtA==: --dhchap-ctrl-secret 
DHHC-1:03:YTYzZWVhODhkMDUzMGU1NmYxOTg4YmNjNWNhYWJkMmQ0MjViOGQ0NWYzMTY0ZTNkOWIzOTI2ZWU3ZTU1MGU3ZtmNcDo=: 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:18:35.399 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.400 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:35.400 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.400 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:35.400 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:35.400 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:35.660 request: 00:18:35.660 { 00:18:35.660 "name": "nvme0", 00:18:35.660 "trtype": "tcp", 00:18:35.660 "traddr": "10.0.0.2", 00:18:35.660 "adrfam": "ipv4", 00:18:35.660 "trsvcid": "4420", 00:18:35.660 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:35.660 "prchk_reftag": false, 00:18:35.660 "prchk_guard": false, 00:18:35.660 "hdgst": false, 00:18:35.660 "ddgst": false, 00:18:35.660 "dhchap_key": "key2", 00:18:35.660 "allow_unrecognized_csi": false, 00:18:35.660 "method": "bdev_nvme_attach_controller", 00:18:35.660 "req_id": 1 00:18:35.660 } 00:18:35.660 Got JSON-RPC error response 00:18:35.660 response: 00:18:35.660 { 00:18:35.660 "code": -5, 00:18:35.660 "message": 
"Input/output error" 00:18:35.660 } 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:35.660 14:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:35.660 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.232 request: 00:18:36.232 { 00:18:36.232 "name": "nvme0", 00:18:36.232 "trtype": "tcp", 00:18:36.232 "traddr": "10.0.0.2", 00:18:36.232 "adrfam": "ipv4", 00:18:36.232 "trsvcid": "4420", 00:18:36.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:36.232 "prchk_reftag": false, 00:18:36.232 "prchk_guard": false, 00:18:36.232 "hdgst": 
false, 00:18:36.232 "ddgst": false, 00:18:36.232 "dhchap_key": "key1", 00:18:36.232 "dhchap_ctrlr_key": "ckey2", 00:18:36.232 "allow_unrecognized_csi": false, 00:18:36.232 "method": "bdev_nvme_attach_controller", 00:18:36.232 "req_id": 1 00:18:36.232 } 00:18:36.232 Got JSON-RPC error response 00:18:36.232 response: 00:18:36.232 { 00:18:36.232 "code": -5, 00:18:36.232 "message": "Input/output error" 00:18:36.232 } 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:36.232 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.233 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.233 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.233 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.876 request: 00:18:36.876 { 00:18:36.876 "name": "nvme0", 00:18:36.876 "trtype": 
"tcp", 00:18:36.876 "traddr": "10.0.0.2", 00:18:36.876 "adrfam": "ipv4", 00:18:36.876 "trsvcid": "4420", 00:18:36.876 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:36.876 "prchk_reftag": false, 00:18:36.876 "prchk_guard": false, 00:18:36.876 "hdgst": false, 00:18:36.876 "ddgst": false, 00:18:36.876 "dhchap_key": "key1", 00:18:36.876 "dhchap_ctrlr_key": "ckey1", 00:18:36.876 "allow_unrecognized_csi": false, 00:18:36.876 "method": "bdev_nvme_attach_controller", 00:18:36.876 "req_id": 1 00:18:36.876 } 00:18:36.876 Got JSON-RPC error response 00:18:36.876 response: 00:18:36.876 { 00:18:36.876 "code": -5, 00:18:36.876 "message": "Input/output error" 00:18:36.876 } 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2381211 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 2381211 ']' 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2381211 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.876 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2381211 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2381211' 00:18:36.877 killing process with pid 2381211 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2381211 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2381211 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.877 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2407510 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2407510 00:18:36.877 14:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2407510 ']' 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.877 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2407510 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2407510 ']' 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.883 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.883 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.883 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:37.883 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:37.883 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.883 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.156 null0 00:18:38.156 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.156 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:38.156 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pnP 00:18:38.156 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.156 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.NiU ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NiU 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zIN 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4Ok ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Ok 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cl5 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Uh4 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uh4 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WQw 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.157 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.097 nvme0n1 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.097 { 00:18:39.097 "cntlid": 1, 00:18:39.097 "qid": 0, 00:18:39.097 "state": "enabled", 00:18:39.097 "thread": "nvmf_tgt_poll_group_000", 00:18:39.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:39.097 "listen_address": { 00:18:39.097 "trtype": "TCP", 00:18:39.097 "adrfam": "IPv4", 00:18:39.097 "traddr": "10.0.0.2", 00:18:39.097 "trsvcid": "4420" 00:18:39.097 }, 00:18:39.097 "peer_address": { 00:18:39.097 "trtype": "TCP", 00:18:39.097 "adrfam": "IPv4", 00:18:39.097 "traddr": 
"10.0.0.1", 00:18:39.097 "trsvcid": "51792" 00:18:39.097 }, 00:18:39.097 "auth": { 00:18:39.097 "state": "completed", 00:18:39.097 "digest": "sha512", 00:18:39.097 "dhgroup": "ffdhe8192" 00:18:39.097 } 00:18:39.097 } 00:18:39.097 ]' 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.097 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.358 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:39.358 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:39.928 14:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:40.188 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.189 14:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.189 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.449 request: 00:18:40.449 { 00:18:40.449 "name": "nvme0", 00:18:40.449 "trtype": "tcp", 00:18:40.449 "traddr": "10.0.0.2", 00:18:40.449 "adrfam": "ipv4", 00:18:40.449 "trsvcid": "4420", 00:18:40.449 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:40.449 "prchk_reftag": false, 00:18:40.449 "prchk_guard": false, 00:18:40.449 "hdgst": false, 00:18:40.449 "ddgst": false, 00:18:40.449 "dhchap_key": "key3", 00:18:40.449 
"allow_unrecognized_csi": false, 00:18:40.449 "method": "bdev_nvme_attach_controller", 00:18:40.449 "req_id": 1 00:18:40.449 } 00:18:40.449 Got JSON-RPC error response 00:18:40.449 response: 00:18:40.449 { 00:18:40.449 "code": -5, 00:18:40.449 "message": "Input/output error" 00:18:40.449 } 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:40.449 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.710 14:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.710 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.970 request: 00:18:40.970 { 00:18:40.970 "name": "nvme0", 00:18:40.970 "trtype": "tcp", 00:18:40.970 "traddr": "10.0.0.2", 00:18:40.970 "adrfam": "ipv4", 00:18:40.970 "trsvcid": "4420", 00:18:40.970 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:40.970 "prchk_reftag": false, 00:18:40.970 "prchk_guard": false, 00:18:40.970 "hdgst": false, 00:18:40.970 "ddgst": false, 00:18:40.970 "dhchap_key": "key3", 00:18:40.970 "allow_unrecognized_csi": false, 00:18:40.970 "method": "bdev_nvme_attach_controller", 00:18:40.970 "req_id": 1 00:18:40.970 } 00:18:40.970 Got JSON-RPC error response 00:18:40.970 response: 00:18:40.970 { 00:18:40.970 "code": -5, 00:18:40.970 "message": "Input/output error" 00:18:40.970 } 00:18:40.970 
14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.970 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.971 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.541 request: 00:18:41.541 { 00:18:41.541 "name": "nvme0", 00:18:41.541 "trtype": "tcp", 00:18:41.541 "traddr": "10.0.0.2", 00:18:41.541 "adrfam": "ipv4", 00:18:41.541 "trsvcid": "4420", 00:18:41.541 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:41.541 "prchk_reftag": false, 00:18:41.541 "prchk_guard": false, 00:18:41.541 "hdgst": false, 00:18:41.541 "ddgst": false, 00:18:41.541 "dhchap_key": "key0", 00:18:41.541 "dhchap_ctrlr_key": "key1", 00:18:41.541 "allow_unrecognized_csi": false, 00:18:41.541 "method": "bdev_nvme_attach_controller", 00:18:41.541 "req_id": 1 00:18:41.541 } 00:18:41.541 Got JSON-RPC error response 00:18:41.541 response: 00:18:41.541 { 00:18:41.541 "code": -5, 00:18:41.541 "message": "Input/output error" 00:18:41.541 } 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:41.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:41.541 nvme0n1 00:18:41.800 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:41.800 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:41.800 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.800 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.800 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.801 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:42.060 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:42.061 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:43.001 nvme0n1 00:18:43.002 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:43.002 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:43.002 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.002 
14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:43.002 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.262 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.262 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:43.262 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: --dhchap-ctrl-secret DHHC-1:03:MjIxM2Q1MzA0YWRjNmJkNmFmMjhjYmE5MDQ4MTFmOWQ1ODcwMjcyOWNmNzIxNGU1MGYwNTRjN2VkNjlkNGUzNY5uwmo=: 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.842 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:44.102 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:44.363 request: 00:18:44.363 { 00:18:44.363 "name": "nvme0", 00:18:44.363 "trtype": "tcp", 00:18:44.363 "traddr": "10.0.0.2", 00:18:44.363 "adrfam": "ipv4", 00:18:44.363 "trsvcid": "4420", 00:18:44.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:44.363 "prchk_reftag": false, 00:18:44.363 "prchk_guard": false, 00:18:44.363 "hdgst": false, 00:18:44.363 "ddgst": false, 00:18:44.363 "dhchap_key": "key1", 00:18:44.363 "allow_unrecognized_csi": false, 00:18:44.363 "method": "bdev_nvme_attach_controller", 00:18:44.363 "req_id": 1 00:18:44.363 } 00:18:44.363 Got JSON-RPC error response 00:18:44.363 response: 00:18:44.363 { 00:18:44.363 "code": -5, 00:18:44.363 "message": "Input/output error" 00:18:44.363 } 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.363 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.306 nvme0n1 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.306 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:45.566 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:45.825 nvme0n1 00:18:45.825 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:45.825 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:45.825 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: '' 2s 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: ]] 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTVkZjZkNWI1YzBkYmNmYTllMzViNDgyYzE2ZWU2ZGSyocAd: 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:46.086 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:48.632 
14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: 2s 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:48.632 14:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: ]] 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTE3YjgyOWJlNmI3MzQyZWY0NWUxOGZmNGY4OTNlMjZiZDUyZmQ4YzcwNGYyZDE2OVJ28g==: 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:48.632 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:50.544 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.115 nvme0n1 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:51.115 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:51.685 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.947 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:52.208 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.208 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:52.208 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:52.468 request: 00:18:52.468 { 00:18:52.468 "name": "nvme0", 00:18:52.468 "dhchap_key": "key1", 00:18:52.468 "dhchap_ctrlr_key": "key3", 00:18:52.468 "method": "bdev_nvme_set_keys", 00:18:52.468 "req_id": 1 00:18:52.468 } 00:18:52.468 Got JSON-RPC error response 00:18:52.468 response: 00:18:52.468 { 00:18:52.468 "code": -13, 00:18:52.468 "message": "Permission denied" 00:18:52.468 } 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:52.468 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:52.468 14:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.728 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:52.728 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:53.668 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:53.668 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:53.668 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.929 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:54.499 nvme0n1 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.760 14:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:54.760 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:55.021 request: 00:18:55.021 { 00:18:55.021 "name": "nvme0", 00:18:55.021 "dhchap_key": "key2", 00:18:55.021 "dhchap_ctrlr_key": "key0", 00:18:55.021 "method": "bdev_nvme_set_keys", 00:18:55.021 "req_id": 1 00:18:55.021 } 00:18:55.021 Got JSON-RPC error response 00:18:55.021 response: 00:18:55.021 { 00:18:55.021 "code": -13, 00:18:55.021 "message": "Permission denied" 00:18:55.021 } 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:55.021 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.281 14:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:55.281 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:56.222 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:56.222 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:56.222 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2381436 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2381436 ']' 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2381436 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2381436 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@970 -- # echo 'killing process with pid 2381436' 00:18:56.482 killing process with pid 2381436 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2381436 00:18:56.482 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2381436 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.742 rmmod nvme_tcp 00:18:56.742 rmmod nvme_fabrics 00:18:56.742 rmmod nvme_keyring 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2407510 ']' 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2407510 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2407510 ']' 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2407510 
00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.742 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2407510 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2407510' 00:18:57.003 killing process with pid 2407510 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2407510 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2407510 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.003 14:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.003 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pnP /tmp/spdk.key-sha256.zIN /tmp/spdk.key-sha384.cl5 /tmp/spdk.key-sha512.WQw /tmp/spdk.key-sha512.NiU /tmp/spdk.key-sha384.4Ok /tmp/spdk.key-sha256.Uh4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:59.548 00:18:59.548 real 2m36.895s 00:18:59.548 user 5m53.023s 00:18:59.548 sys 0m24.823s 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.548 ************************************ 00:18:59.548 END TEST nvmf_auth_target 00:18:59.548 ************************************ 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- 
# xtrace_disable 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.548 ************************************ 00:18:59.548 START TEST nvmf_bdevio_no_huge 00:18:59.548 ************************************ 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:59.548 * Looking for test storage... 00:18:59.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:59.548 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:59.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.549 --rc genhtml_branch_coverage=1 00:18:59.549 --rc genhtml_function_coverage=1 00:18:59.549 --rc genhtml_legend=1 00:18:59.549 --rc geninfo_all_blocks=1 00:18:59.549 --rc geninfo_unexecuted_blocks=1 00:18:59.549 00:18:59.549 ' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:59.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.549 --rc genhtml_branch_coverage=1 00:18:59.549 --rc genhtml_function_coverage=1 00:18:59.549 --rc genhtml_legend=1 00:18:59.549 --rc geninfo_all_blocks=1 00:18:59.549 --rc geninfo_unexecuted_blocks=1 00:18:59.549 00:18:59.549 ' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:59.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.549 --rc genhtml_branch_coverage=1 00:18:59.549 --rc genhtml_function_coverage=1 00:18:59.549 --rc genhtml_legend=1 00:18:59.549 --rc geninfo_all_blocks=1 00:18:59.549 --rc geninfo_unexecuted_blocks=1 00:18:59.549 00:18:59.549 ' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:59.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.549 --rc genhtml_branch_coverage=1 
00:18:59.549 --rc genhtml_function_coverage=1 00:18:59.549 --rc genhtml_legend=1 00:18:59.549 --rc geninfo_all_blocks=1 00:18:59.549 --rc geninfo_unexecuted_blocks=1 00:18:59.549 00:18:59.549 ' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:59.549 14:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.549 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.550 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:19:07.694 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:07.694 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:07.694 Found net devices under 0000:31:00.0: cvl_0_0 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.694 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.695 
14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:07.695 Found net devices under 0000:31:00.1: cvl_0_1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.695 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:07.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:19:07.695 00:19:07.695 --- 10.0.0.2 ping statistics --- 00:19:07.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.695 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:19:07.695 00:19:07.695 --- 10.0.0.1 ping statistics --- 00:19:07.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.695 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2415702 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2415702 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2415702 ']' 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.695 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.695 [2024-11-06 14:00:53.247612] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:07.695 [2024-11-06 14:00:53.247684] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:07.695 [2024-11-06 14:00:53.358449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.695 [2024-11-06 14:00:53.417280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.695 [2024-11-06 14:00:53.417323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.695 [2024-11-06 14:00:53.417332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.695 [2024-11-06 14:00:53.417339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.695 [2024-11-06 14:00:53.417345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.695 [2024-11-06 14:00:53.418873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:07.695 [2024-11-06 14:00:53.419147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:07.695 [2024-11-06 14:00:53.419419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:07.695 [2024-11-06 14:00:53.419422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 [2024-11-06 14:00:54.123705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:07.957 14:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 Malloc0 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.957 [2024-11-06 14:00:54.177478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.957 14:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.957 { 00:19:07.957 "params": { 00:19:07.957 "name": "Nvme$subsystem", 00:19:07.957 "trtype": "$TEST_TRANSPORT", 00:19:07.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.957 "adrfam": "ipv4", 00:19:07.957 "trsvcid": "$NVMF_PORT", 00:19:07.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.957 "hdgst": ${hdgst:-false}, 00:19:07.957 "ddgst": ${ddgst:-false} 00:19:07.957 }, 00:19:07.957 "method": "bdev_nvme_attach_controller" 00:19:07.957 } 00:19:07.957 EOF 00:19:07.957 )") 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:07.957 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:07.957 "params": { 00:19:07.957 "name": "Nvme1", 00:19:07.957 "trtype": "tcp", 00:19:07.957 "traddr": "10.0.0.2", 00:19:07.957 "adrfam": "ipv4", 00:19:07.957 "trsvcid": "4420", 00:19:07.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.957 "hdgst": false, 00:19:07.957 "ddgst": false 00:19:07.957 }, 00:19:07.957 "method": "bdev_nvme_attach_controller" 00:19:07.957 }' 00:19:07.957 [2024-11-06 14:00:54.234612] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:19:07.957 [2024-11-06 14:00:54.234685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2415761 ] 00:19:08.219 [2024-11-06 14:00:54.332757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:08.219 [2024-11-06 14:00:54.393617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.220 [2024-11-06 14:00:54.393786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.220 [2024-11-06 14:00:54.393788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.481 I/O targets: 00:19:08.481 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:08.481 00:19:08.481 00:19:08.481 CUnit - A unit testing framework for C - Version 2.1-3 00:19:08.481 http://cunit.sourceforge.net/ 00:19:08.481 00:19:08.481 00:19:08.481 Suite: bdevio tests on: Nvme1n1 00:19:08.481 Test: blockdev write read block ...passed 00:19:08.481 Test: blockdev write zeroes read block ...passed 00:19:08.481 Test: blockdev write zeroes read no split ...passed 00:19:08.481 Test: blockdev write zeroes 
read split ...passed 00:19:08.481 Test: blockdev write zeroes read split partial ...passed 00:19:08.481 Test: blockdev reset ...[2024-11-06 14:00:54.751674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:08.481 [2024-11-06 14:00:54.751785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976400 (9): Bad file descriptor 00:19:08.742 [2024-11-06 14:00:54.847189] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:08.742 passed 00:19:08.742 Test: blockdev write read 8 blocks ...passed 00:19:08.742 Test: blockdev write read size > 128k ...passed 00:19:08.742 Test: blockdev write read invalid size ...passed 00:19:08.742 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:08.742 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:08.742 Test: blockdev write read max offset ...passed 00:19:08.742 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:08.742 Test: blockdev writev readv 8 blocks ...passed 00:19:09.004 Test: blockdev writev readv 30 x 1block ...passed 00:19:09.004 Test: blockdev writev readv block ...passed 00:19:09.004 Test: blockdev writev readv size > 128k ...passed 00:19:09.004 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:09.004 Test: blockdev comparev and writev ...[2024-11-06 14:00:55.069275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.069323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.069340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 
14:00:55.069350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.069794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.069808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.069822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.069837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.070254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.070267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.070282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.070290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.070726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.070738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.070759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.004 [2024-11-06 14:00:55.070767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.004 passed 00:19:09.004 Test: blockdev nvme passthru rw ...passed 00:19:09.004 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:00:55.155192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.004 [2024-11-06 14:00:55.155207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.155437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.004 [2024-11-06 14:00:55.155447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.155673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.004 [2024-11-06 14:00:55.155684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.004 [2024-11-06 14:00:55.155957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.004 [2024-11-06 14:00:55.155968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.004 passed 00:19:09.004 Test: blockdev nvme admin passthru ...passed 00:19:09.004 Test: blockdev copy ...passed 00:19:09.004 00:19:09.004 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.004 suites 1 1 n/a 0 0 00:19:09.004 tests 23 23 23 0 0 00:19:09.004 asserts 152 152 152 0 n/a 00:19:09.004 00:19:09.004 Elapsed time = 1.202 seconds 
00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.265 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.265 rmmod nvme_tcp 00:19:09.526 rmmod nvme_fabrics 00:19:09.526 rmmod nvme_keyring 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2415702 ']' 00:19:09.526 14:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2415702 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2415702 ']' 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2415702 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2415702 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2415702' 00:19:09.526 killing process with pid 2415702 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2415702 00:19:09.526 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2415702 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.788 14:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:09.788 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.788 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.788 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:09.788 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.788 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.788 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.334 00:19:12.334 real 0m12.781s 00:19:12.334 user 0m14.510s 00:19:12.334 sys 0m6.866s 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.334 ************************************ 00:19:12.334 END TEST nvmf_bdevio_no_huge 00:19:12.334 ************************************ 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.334 
************************************ 00:19:12.334 START TEST nvmf_tls 00:19:12.334 ************************************ 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:12.334 * Looking for test storage... 00:19:12.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.334 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:12.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.334 --rc genhtml_branch_coverage=1 00:19:12.334 --rc genhtml_function_coverage=1 00:19:12.334 --rc genhtml_legend=1 00:19:12.335 --rc geninfo_all_blocks=1 00:19:12.335 --rc geninfo_unexecuted_blocks=1 00:19:12.335 00:19:12.335 ' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.335 --rc genhtml_branch_coverage=1 00:19:12.335 --rc genhtml_function_coverage=1 00:19:12.335 --rc genhtml_legend=1 00:19:12.335 --rc geninfo_all_blocks=1 00:19:12.335 --rc geninfo_unexecuted_blocks=1 00:19:12.335 00:19:12.335 ' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.335 --rc genhtml_branch_coverage=1 00:19:12.335 --rc genhtml_function_coverage=1 00:19:12.335 --rc genhtml_legend=1 00:19:12.335 --rc geninfo_all_blocks=1 00:19:12.335 --rc geninfo_unexecuted_blocks=1 00:19:12.335 00:19:12.335 ' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.335 --rc genhtml_branch_coverage=1 00:19:12.335 --rc genhtml_function_coverage=1 00:19:12.335 --rc genhtml_legend=1 00:19:12.335 --rc geninfo_all_blocks=1 00:19:12.335 --rc geninfo_unexecuted_blocks=1 00:19:12.335 00:19:12.335 ' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.335 
14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:12.335 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.477 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.478 14:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:20.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:20.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.478 14:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:20.478 Found net devices under 0000:31:00.0: cvl_0_0 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:20.478 Found net devices under 0000:31:00.1: cvl_0_1 00:19:20.478 14:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.478 
14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:19:20.478 00:19:20.478 --- 10.0.0.2 ping statistics --- 00:19:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.478 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:19:20.478 00:19:20.478 --- 10.0.0.1 ping statistics --- 00:19:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.478 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.478 14:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2420442 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2420442 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2420442 ']' 00:19:20.478 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.479 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:20.479 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.479 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:20.479 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 [2024-11-06 14:01:06.111946] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:19:20.479 [2024-11-06 14:01:06.112010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.479 [2024-11-06 14:01:06.215952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.479 [2024-11-06 14:01:06.266127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.479 [2024-11-06 14:01:06.266180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:20.479 [2024-11-06 14:01:06.266189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.479 [2024-11-06 14:01:06.266197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.479 [2024-11-06 14:01:06.266203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.479 [2024-11-06 14:01:06.267002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.739 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.739 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:20.739 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.739 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.739 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.740 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.740 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:20.740 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:21.000 true 00:19:21.000 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.000 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:21.261 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:21.261 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:21.261 
14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:21.261 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.261 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:21.522 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:21.522 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:21.522 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:21.782 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.782 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:22.043 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
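The trace above repeats one set/get/verify cycle per option: `sock_impl_set_options` writes a value (`--tls-version 13`, `--tls-version 7`, `--enable-ktls`), `sock_impl_get_options` reads the options back as JSON, and `jq -r` extracts the field for a `[[ … != … ]]` comparison. A minimal Python sketch of that pattern — the rpc.py path is the one used in this run; `set_and_verify_tls_version` assumes a live SPDK target on the default RPC socket, while `get_sock_option` is pure parsing:

```python
import json
import subprocess

# Path to rpc.py as used throughout this run.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def get_sock_option(raw_json, field):
    # sock_impl_get_options prints a single JSON object; this is the
    # `jq -r .tls_version` / `jq -r .enable_ktls` step as a function.
    return json.loads(raw_json)[field]

def set_and_verify_tls_version(version):
    # Mirrors tls.sh@81-83: set the option, read it back, compare.
    # Requires a running SPDK app listening on the default RPC socket.
    subprocess.run([RPC, "sock_impl_set_options", "-i", "ssl",
                    "--tls-version", str(version)], check=True)
    out = subprocess.run([RPC, "sock_impl_get_options", "-i", "ssl"],
                         capture_output=True, text=True, check=True).stdout
    return get_sock_option(out, "tls_version") == version
```

The same helper pattern covers the ktls checks by swapping the flag and the JSON field.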
00:19:22.304 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.304 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:22.565 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:22.565 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:22.565 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:22.565 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.565 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:22.825 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:22.825 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:22.825 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:22.826 14:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:22.826 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.nHv1jZXlG7 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Hit83VOK7u 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.nHv1jZXlG7 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Hit83VOK7u 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:23.086 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:23.347 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.nHv1jZXlG7 00:19:23.347 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nHv1jZXlG7 00:19:23.347 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:23.607 [2024-11-06 14:01:09.702532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.607 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:23.867 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.867 [2024-11-06 14:01:10.039359] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.867 [2024-11-06 14:01:10.039570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.867 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:24.127 malloc0 00:19:24.127 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:24.387 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nHv1jZXlG7 00:19:24.387 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.647 14:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nHv1jZXlG7 00:19:34.641 Initializing NVMe Controllers 00:19:34.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:34.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:34.641 Initialization complete. Launching workers. 
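The two `NVMeTLSkey-1` strings generated earlier (tls.sh@119/@120, via `format_interchange_psk` and an inline `python -` snippet) follow the NVMe TLS PSK interchange layout: a fixed prefix, a two-digit hash identifier, then base64 of the configured PSK bytes with a CRC-32 appended, and a trailing colon. A hedged reconstruction — the little-endian CRC placement is my reading of the output, and note the helper passes the 32-character hex string through as literal ASCII rather than decoding it:

```python
import base64
import struct
import zlib

def format_interchange_psk(key, hash_id=1):
    # NVMe TLS PSK interchange form: NVMeTLSkey-1:<hh>:<base64(PSK || CRC32(PSK))>:
    # hash_id 1 (printed "01") selects the SHA-256 variant.
    data = key.encode("ascii")                   # the hex string is used as-is
    data += struct.pack("<I", zlib.crc32(data))  # CRC-32 appended, little-endian (assumed)
    return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(data).decode())
```

For `00112233445566778899aabbccddeeff` this produces a 48-character base64 body that decodes back to the 32 key characters plus four CRC bytes, consistent in shape with the key written to `/tmp/tmp.nHv1jZXlG7` above.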
00:19:34.641 ======================================================== 00:19:34.641 Latency(us) 00:19:34.641 Device Information : IOPS MiB/s Average min max 00:19:34.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18644.67 72.83 3432.83 1154.44 4361.39 00:19:34.641 ======================================================== 00:19:34.641 Total : 18644.67 72.83 3432.83 1154.44 4361.39 00:19:34.641 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nHv1jZXlG7 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nHv1jZXlG7 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2423199 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2423199 /var/tmp/bdevperf.sock 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2423199 ']' 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.641 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.901 [2024-11-06 14:01:20.921925] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:19:34.901 [2024-11-06 14:01:20.921981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423199 ] 00:19:34.901 [2024-11-06 14:01:21.009503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.901 [2024-11-06 14:01:21.045039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.472 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.472 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:35.472 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nHv1jZXlG7 00:19:35.789 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:35.789 [2024-11-06 14:01:22.034203] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.069 TLSTESTn1 00:19:36.069 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:36.069 Running I/O for 10 seconds... 00:19:38.004 5605.00 IOPS, 21.89 MiB/s [2024-11-06T13:01:25.667Z] 5927.50 IOPS, 23.15 MiB/s [2024-11-06T13:01:26.238Z] 6069.67 IOPS, 23.71 MiB/s [2024-11-06T13:01:27.619Z] 5995.75 IOPS, 23.42 MiB/s [2024-11-06T13:01:28.560Z] 5948.00 IOPS, 23.23 MiB/s [2024-11-06T13:01:29.500Z] 5982.83 IOPS, 23.37 MiB/s [2024-11-06T13:01:30.440Z] 6019.71 IOPS, 23.51 MiB/s [2024-11-06T13:01:31.380Z] 6027.25 IOPS, 23.54 MiB/s [2024-11-06T13:01:32.320Z] 6061.56 IOPS, 23.68 MiB/s [2024-11-06T13:01:32.320Z] 6057.70 IOPS, 23.66 MiB/s 00:19:46.040 Latency(us) 00:19:46.040 [2024-11-06T13:01:32.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.040 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:46.040 Verification LBA range: start 0x0 length 0x2000 00:19:46.040 TLSTESTn1 : 10.01 6064.13 23.69 0.00 0.00 21072.52 4041.39 26214.40 00:19:46.040 [2024-11-06T13:01:32.320Z] =================================================================================================================== 00:19:46.040 [2024-11-06T13:01:32.320Z] Total : 6064.13 23.69 0.00 0.00 21072.52 4041.39 26214.40 00:19:46.040 { 00:19:46.040 "results": [ 00:19:46.040 { 00:19:46.040 "job": "TLSTESTn1", 00:19:46.040 "core_mask": "0x4", 00:19:46.041 "workload": "verify", 00:19:46.041 "status": "finished", 00:19:46.041 "verify_range": { 00:19:46.041 "start": 0, 00:19:46.041 "length": 8192 00:19:46.041 }, 00:19:46.041 "queue_depth": 128, 00:19:46.041 "io_size": 4096, 00:19:46.041 "runtime": 10.01017, 00:19:46.041 "iops": 
6064.1327769658255, 00:19:46.041 "mibps": 23.688018660022756, 00:19:46.041 "io_failed": 0, 00:19:46.041 "io_timeout": 0, 00:19:46.041 "avg_latency_us": 21072.519983965645, 00:19:46.041 "min_latency_us": 4041.3866666666668, 00:19:46.041 "max_latency_us": 26214.4 00:19:46.041 } 00:19:46.041 ], 00:19:46.041 "core_count": 1 00:19:46.041 } 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2423199 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2423199 ']' 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2423199 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.041 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2423199 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2423199' 00:19:46.302 killing process with pid 2423199 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2423199 00:19:46.302 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.302 00:19:46.302 Latency(us) 00:19:46.302 [2024-11-06T13:01:32.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.302 [2024-11-06T13:01:32.582Z] 
=================================================================================================================== 00:19:46.302 [2024-11-06T13:01:32.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2423199 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hit83VOK7u 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hit83VOK7u 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hit83VOK7u 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hit83VOK7u 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2425526 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2425526 /var/tmp/bdevperf.sock 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2425526 ']' 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.302 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.302 [2024-11-06 14:01:32.502696] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:46.302 [2024-11-06 14:01:32.502769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425526 ] 00:19:46.563 [2024-11-06 14:01:32.587081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.563 [2024-11-06 14:01:32.615716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.133 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:47.133 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:47.133 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hit83VOK7u 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.395 [2024-11-06 14:01:33.579368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.395 [2024-11-06 14:01:33.589825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:47.395 [2024-11-06 14:01:33.590502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880bc0 (107): Transport endpoint is not connected 00:19:47.395 [2024-11-06 14:01:33.591498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880bc0 (9): Bad file descriptor 00:19:47.395 [2024-11-06 
14:01:33.592501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:47.395 [2024-11-06 14:01:33.592511] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:47.395 [2024-11-06 14:01:33.592517] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:47.395 [2024-11-06 14:01:33.592525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:47.395 request: 00:19:47.395 { 00:19:47.395 "name": "TLSTEST", 00:19:47.395 "trtype": "tcp", 00:19:47.395 "traddr": "10.0.0.2", 00:19:47.395 "adrfam": "ipv4", 00:19:47.395 "trsvcid": "4420", 00:19:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.395 "prchk_reftag": false, 00:19:47.395 "prchk_guard": false, 00:19:47.395 "hdgst": false, 00:19:47.395 "ddgst": false, 00:19:47.395 "psk": "key0", 00:19:47.395 "allow_unrecognized_csi": false, 00:19:47.395 "method": "bdev_nvme_attach_controller", 00:19:47.395 "req_id": 1 00:19:47.395 } 00:19:47.395 Got JSON-RPC error response 00:19:47.395 response: 00:19:47.395 { 00:19:47.395 "code": -5, 00:19:47.395 "message": "Input/output error" 00:19:47.395 } 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2425526 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2425526 ']' 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2425526 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:47.395 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2425526 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2425526' 00:19:47.656 killing process with pid 2425526 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2425526 00:19:47.656 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.656 00:19:47.656 Latency(us) 00:19:47.656 [2024-11-06T13:01:33.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.656 [2024-11-06T13:01:33.936Z] =================================================================================================================== 00:19:47.656 [2024-11-06T13:01:33.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2425526 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nHv1jZXlG7 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nHv1jZXlG7 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:47.656 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nHv1jZXlG7 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nHv1jZXlG7 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2425873 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2425873 /var/tmp/bdevperf.sock 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2425873 ']' 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:47.657 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 [2024-11-06 14:01:33.826102] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:47.657 [2024-11-06 14:01:33.826156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425873 ] 00:19:47.657 [2024-11-06 14:01:33.908104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.917 [2024-11-06 14:01:33.936727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.490 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.490 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:48.490 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nHv1jZXlG7 00:19:48.490 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:48.751 [2024-11-06 14:01:34.900421] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.751 [2024-11-06 14:01:34.904826] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:48.751 [2024-11-06 14:01:34.904845] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:48.751 [2024-11-06 14:01:34.904864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:48.751 [2024-11-06 14:01:34.905511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x587bc0 (107): Transport endpoint is not connected 00:19:48.751 [2024-11-06 14:01:34.906506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x587bc0 (9): Bad file descriptor 00:19:48.751 [2024-11-06 14:01:34.907507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:48.751 [2024-11-06 14:01:34.907515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:48.751 [2024-11-06 14:01:34.907520] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:48.751 [2024-11-06 14:01:34.907529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:48.751 request: 00:19:48.751 { 00:19:48.751 "name": "TLSTEST", 00:19:48.751 "trtype": "tcp", 00:19:48.751 "traddr": "10.0.0.2", 00:19:48.751 "adrfam": "ipv4", 00:19:48.751 "trsvcid": "4420", 00:19:48.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:48.751 "prchk_reftag": false, 00:19:48.751 "prchk_guard": false, 00:19:48.751 "hdgst": false, 00:19:48.751 "ddgst": false, 00:19:48.751 "psk": "key0", 00:19:48.751 "allow_unrecognized_csi": false, 00:19:48.751 "method": "bdev_nvme_attach_controller", 00:19:48.751 "req_id": 1 00:19:48.751 } 00:19:48.751 Got JSON-RPC error response 00:19:48.751 response: 00:19:48.751 { 00:19:48.751 "code": -5, 00:19:48.751 "message": "Input/output error" 00:19:48.751 } 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2425873 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2425873 ']' 00:19:48.751 14:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2425873 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2425873 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2425873' 00:19:48.751 killing process with pid 2425873 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2425873 00:19:48.751 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.751 00:19:48.751 Latency(us) 00:19:48.751 [2024-11-06T13:01:35.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.751 [2024-11-06T13:01:35.031Z] =================================================================================================================== 00:19:48.751 [2024-11-06T13:01:35.031Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.751 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2425873 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.011 14:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nHv1jZXlG7 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nHv1jZXlG7 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nHv1jZXlG7 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nHv1jZXlG7 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2426034 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.011 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2426034 /var/tmp/bdevperf.sock 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2426034 ']' 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.012 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.012 [2024-11-06 14:01:35.136321] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:49.012 [2024-11-06 14:01:35.136376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426034 ] 00:19:49.012 [2024-11-06 14:01:35.219305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.012 [2024-11-06 14:01:35.248151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.952 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.952 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:49.952 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nHv1jZXlG7 00:19:49.952 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.212 [2024-11-06 14:01:36.239978] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.212 [2024-11-06 14:01:36.246193] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:50.212 [2024-11-06 14:01:36.246211] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:50.212 [2024-11-06 14:01:36.246229] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:50.212 [2024-11-06 14:01:36.247081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1ebc0 (107): Transport endpoint is not connected 00:19:50.212 [2024-11-06 14:01:36.248077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1ebc0 (9): Bad file descriptor 00:19:50.212 [2024-11-06 14:01:36.249079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:50.212 [2024-11-06 14:01:36.249086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.212 [2024-11-06 14:01:36.249092] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:50.212 [2024-11-06 14:01:36.249100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:50.212 request: 00:19:50.212 { 00:19:50.212 "name": "TLSTEST", 00:19:50.212 "trtype": "tcp", 00:19:50.212 "traddr": "10.0.0.2", 00:19:50.212 "adrfam": "ipv4", 00:19:50.212 "trsvcid": "4420", 00:19:50.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.212 "prchk_reftag": false, 00:19:50.212 "prchk_guard": false, 00:19:50.212 "hdgst": false, 00:19:50.212 "ddgst": false, 00:19:50.212 "psk": "key0", 00:19:50.212 "allow_unrecognized_csi": false, 00:19:50.212 "method": "bdev_nvme_attach_controller", 00:19:50.212 "req_id": 1 00:19:50.212 } 00:19:50.212 Got JSON-RPC error response 00:19:50.212 response: 00:19:50.212 { 00:19:50.212 "code": -5, 00:19:50.212 "message": "Input/output error" 00:19:50.212 } 00:19:50.212 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2426034 00:19:50.212 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2426034 ']' 00:19:50.212 14:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2426034 00:19:50.212 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2426034 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2426034' 00:19:50.213 killing process with pid 2426034 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2426034 00:19:50.213 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.213 00:19:50.213 Latency(us) 00:19:50.213 [2024-11-06T13:01:36.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.213 [2024-11-06T13:01:36.493Z] =================================================================================================================== 00:19:50.213 [2024-11-06T13:01:36.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2426034 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.213 14:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2426236 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.213 14:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2426236 /var/tmp/bdevperf.sock 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2426236 ']' 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.213 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.213 [2024-11-06 14:01:36.482487] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:50.213 [2024-11-06 14:01:36.482544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426236 ] 00:19:50.473 [2024-11-06 14:01:36.565988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.473 [2024-11-06 14:01:36.594818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.043 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.043 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.043 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:51.303 [2024-11-06 14:01:37.418224] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:51.303 [2024-11-06 14:01:37.418244] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.303 request: 00:19:51.303 { 00:19:51.303 "name": "key0", 00:19:51.303 "path": "", 00:19:51.303 "method": "keyring_file_add_key", 00:19:51.303 "req_id": 1 00:19:51.303 } 00:19:51.303 Got JSON-RPC error response 00:19:51.303 response: 00:19:51.303 { 00:19:51.303 "code": -1, 00:19:51.303 "message": "Operation not permitted" 00:19:51.303 } 00:19:51.303 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.564 [2024-11-06 14:01:37.594752] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
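The keyring errors in this log (an empty path rejected here with "Non-absolute paths are not allowed", and later in the run a key file with mode 0100666 rejected with "Invalid permissions for key file") suggest the file-based keyring accepts only absolute paths with owner-only permissions. A hypothetical sketch of that validation — the function name and exact rules are assumptions inferred from the log messages, not SPDK's actual `keyring.c` code:

```python
import os
import stat

def key_path_ok(path: str) -> bool:
    """Mimic the checks suggested by the keyring.c errors in this log (assumed logic)."""
    # "Non-absolute paths are not allowed": an empty or relative path fails.
    if not os.path.isabs(path):
        return False
    # "Invalid permissions for key file ... 0100666": any group/other access fails.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0
```

With a key file created via `mktemp` and `chmod 0600`, as the test script does later, both checks pass; after the test's deliberate `chmod 0666` the permission check fails, matching the second error path exercised in this log.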
00:19:51.564 [2024-11-06 14:01:37.594771] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:51.564 request: 00:19:51.564 { 00:19:51.564 "name": "TLSTEST", 00:19:51.564 "trtype": "tcp", 00:19:51.564 "traddr": "10.0.0.2", 00:19:51.564 "adrfam": "ipv4", 00:19:51.564 "trsvcid": "4420", 00:19:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.564 "prchk_reftag": false, 00:19:51.564 "prchk_guard": false, 00:19:51.564 "hdgst": false, 00:19:51.564 "ddgst": false, 00:19:51.564 "psk": "key0", 00:19:51.564 "allow_unrecognized_csi": false, 00:19:51.564 "method": "bdev_nvme_attach_controller", 00:19:51.564 "req_id": 1 00:19:51.564 } 00:19:51.564 Got JSON-RPC error response 00:19:51.564 response: 00:19:51.564 { 00:19:51.564 "code": -126, 00:19:51.564 "message": "Required key not available" 00:19:51.564 } 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2426236 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2426236 ']' 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2426236 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2426236 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2426236' 00:19:51.564 killing process with pid 2426236 
00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2426236 00:19:51.564 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.564 00:19:51.564 Latency(us) 00:19:51.564 [2024-11-06T13:01:37.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.564 [2024-11-06T13:01:37.844Z] =================================================================================================================== 00:19:51.564 [2024-11-06T13:01:37.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2426236 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2420442 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2420442 ']' 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2420442 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.564 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2420442 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2420442' 00:19:51.825 killing process with pid 2420442 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2420442 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2420442 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:51.825 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gBlnJbMHZu 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:51.825 14:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gBlnJbMHZu 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2426596 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2426596 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2426596 ']' 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.825 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.825 [2024-11-06 14:01:38.080938] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:51.825 [2024-11-06 14:01:38.080991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.086 [2024-11-06 14:01:38.172180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.086 [2024-11-06 14:01:38.200382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.086 [2024-11-06 14:01:38.200409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.086 [2024-11-06 14:01:38.200414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.086 [2024-11-06 14:01:38.200419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.086 [2024-11-06 14:01:38.200423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
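The `format_interchange_psk` step above wraps the configured key in the NVMe/TCP TLS PSK interchange format: a `NVMeTLSkey-1` prefix, a two-digit hash identifier, and a base64 payload of the key bytes with a CRC32 appended — visible in the log, where the base64 payload `MDAxMTIy...` decodes back to the configured string `00112233...` plus four trailing checksum bytes. A minimal re-implementation under those assumptions (the helper name mirrors the shell function, but the byte layout here is inferred from the log, not taken from SPDK's source):

```python
import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    """Build the PSK interchange string: prefix, hash id, then
    base64(key || CRC32(key) little-endian), terminated by ':'."""
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(key + crc).decode()}:"

# The test passes the hex string itself as the key material, matching the
# base64 payload visible in this log.
print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))
```

The test then writes this string to a `mktemp` file with mode 0600 so it can be registered with `keyring_file_add_key` and referenced as `--psk key0`.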
00:19:52.086 [2024-11-06 14:01:38.200917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gBlnJbMHZu 00:19:52.658 14:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.920 [2024-11-06 14:01:39.049533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.920 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.179 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.179 [2024-11-06 14:01:39.410415] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.179 [2024-11-06 14:01:39.410608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:53.179 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.441 malloc0 00:19:53.441 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.701 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:19:53.961 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gBlnJbMHZu 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gBlnJbMHZu 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2426992 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2426992 /var/tmp/bdevperf.sock 
00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2426992 ']' 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.961 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.961 [2024-11-06 14:01:40.209537] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:19:53.961 [2024-11-06 14:01:40.209593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426992 ] 00:19:54.222 [2024-11-06 14:01:40.272119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.222 [2024-11-06 14:01:40.301064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.222 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.222 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.222 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:19:54.483 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.483 [2024-11-06 14:01:40.727454] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.744 TLSTESTn1 00:19:54.744 14:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.744 Running I/O for 10 seconds... 
00:19:57.070 6185.00 IOPS, 24.16 MiB/s [2024-11-06T13:01:44.291Z] 6083.50 IOPS, 23.76 MiB/s [2024-11-06T13:01:45.233Z] 5667.67 IOPS, 22.14 MiB/s [2024-11-06T13:01:46.174Z] 5669.25 IOPS, 22.15 MiB/s [2024-11-06T13:01:47.113Z] 5753.80 IOPS, 22.48 MiB/s [2024-11-06T13:01:48.053Z] 5677.83 IOPS, 22.18 MiB/s [2024-11-06T13:01:48.994Z] 5627.57 IOPS, 21.98 MiB/s [2024-11-06T13:01:49.936Z] 5628.62 IOPS, 21.99 MiB/s [2024-11-06T13:01:51.321Z] 5627.78 IOPS, 21.98 MiB/s [2024-11-06T13:01:51.321Z] 5667.40 IOPS, 22.14 MiB/s 00:20:05.041 Latency(us) 00:20:05.041 [2024-11-06T13:01:51.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.041 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.041 Verification LBA range: start 0x0 length 0x2000 00:20:05.041 TLSTESTn1 : 10.05 5653.59 22.08 0.00 0.00 22579.42 6116.69 45656.75 00:20:05.041 [2024-11-06T13:01:51.321Z] =================================================================================================================== 00:20:05.041 [2024-11-06T13:01:51.321Z] Total : 5653.59 22.08 0.00 0.00 22579.42 6116.69 45656.75 00:20:05.041 { 00:20:05.041 "results": [ 00:20:05.041 { 00:20:05.041 "job": "TLSTESTn1", 00:20:05.041 "core_mask": "0x4", 00:20:05.041 "workload": "verify", 00:20:05.041 "status": "finished", 00:20:05.041 "verify_range": { 00:20:05.041 "start": 0, 00:20:05.041 "length": 8192 00:20:05.041 }, 00:20:05.041 "queue_depth": 128, 00:20:05.041 "io_size": 4096, 00:20:05.041 "runtime": 10.047065, 00:20:05.041 "iops": 5653.591372206709, 00:20:05.041 "mibps": 22.084341297682457, 00:20:05.041 "io_failed": 0, 00:20:05.041 "io_timeout": 0, 00:20:05.041 "avg_latency_us": 22579.417412062954, 00:20:05.041 "min_latency_us": 6116.693333333334, 00:20:05.041 "max_latency_us": 45656.746666666666 00:20:05.041 } 00:20:05.041 ], 00:20:05.041 "core_count": 1 00:20:05.041 } 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2426992 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2426992 ']' 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2426992 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2426992 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2426992' 00:20:05.041 killing process with pid 2426992 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2426992 00:20:05.041 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.041 00:20:05.041 Latency(us) 00:20:05.041 [2024-11-06T13:01:51.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.041 [2024-11-06T13:01:51.321Z] =================================================================================================================== 00:20:05.041 [2024-11-06T13:01:51.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2426992 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gBlnJbMHZu 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gBlnJbMHZu 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gBlnJbMHZu 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gBlnJbMHZu 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gBlnJbMHZu 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2429264 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2429264 /var/tmp/bdevperf.sock 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2429264 ']' 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.041 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.041 [2024-11-06 14:01:51.234997] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:20:05.041 [2024-11-06 14:01:51.235053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429264 ] 00:20:05.041 [2024-11-06 14:01:51.317767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.302 [2024-11-06 14:01:51.345478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.873 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.873 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:05.873 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:06.134 [2024-11-06 14:01:52.188776] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gBlnJbMHZu': 0100666 00:20:06.134 [2024-11-06 14:01:52.188800] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:06.134 request: 00:20:06.134 { 00:20:06.134 "name": "key0", 00:20:06.134 "path": "/tmp/tmp.gBlnJbMHZu", 00:20:06.134 "method": "keyring_file_add_key", 00:20:06.134 "req_id": 1 00:20:06.134 } 00:20:06.134 Got JSON-RPC error response 00:20:06.134 response: 00:20:06.134 { 00:20:06.134 "code": -1, 00:20:06.134 "message": "Operation not permitted" 00:20:06.134 } 00:20:06.134 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.134 [2024-11-06 14:01:52.369300] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.134 [2024-11-06 14:01:52.369320] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:06.134 request: 00:20:06.134 { 00:20:06.134 "name": "TLSTEST", 00:20:06.134 "trtype": "tcp", 00:20:06.134 "traddr": "10.0.0.2", 00:20:06.134 "adrfam": "ipv4", 00:20:06.134 "trsvcid": "4420", 00:20:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.134 "prchk_reftag": false, 00:20:06.134 "prchk_guard": false, 00:20:06.134 "hdgst": false, 00:20:06.134 "ddgst": false, 00:20:06.134 "psk": "key0", 00:20:06.134 "allow_unrecognized_csi": false, 00:20:06.135 "method": "bdev_nvme_attach_controller", 00:20:06.135 "req_id": 1 00:20:06.135 } 00:20:06.135 Got JSON-RPC error response 00:20:06.135 response: 00:20:06.135 { 00:20:06.135 "code": -126, 00:20:06.135 "message": "Required key not available" 00:20:06.135 } 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2429264 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2429264 ']' 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2429264 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.135 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2429264 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 2429264' 00:20:06.396 killing process with pid 2429264 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2429264 00:20:06.396 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.396 00:20:06.396 Latency(us) 00:20:06.396 [2024-11-06T13:01:52.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.396 [2024-11-06T13:01:52.676Z] =================================================================================================================== 00:20:06.396 [2024-11-06T13:01:52.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2429264 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2426596 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2426596 ']' 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2426596 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2426596 00:20:06.396 
14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2426596' 00:20:06.396 killing process with pid 2426596 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2426596 00:20:06.396 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2426596 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2429525 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2429525 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2429525 ']' 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:06.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.657 14:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.657 [2024-11-06 14:01:52.799551] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:06.657 [2024-11-06 14:01:52.799610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.657 [2024-11-06 14:01:52.888978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.657 [2024-11-06 14:01:52.918513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.657 [2024-11-06 14:01:52.918543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.657 [2024-11-06 14:01:52.918548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.657 [2024-11-06 14:01:52.918553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.657 [2024-11-06 14:01:52.918557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
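(Editor's note: the `NOT setup_nvmf_tgt` / `es=1` sequence in the trace above is the harness's expected-failure idiom — the test passes only if the wrapped command returns nonzero. A minimal sketch of that pattern, not SPDK's actual `autotest_common.sh` implementation:)

```shell
#!/bin/sh
# Sketch of an expected-failure wrapper like the harness's NOT:
# it succeeds only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded -> test failure
    fi
    return 0            # command failed, which is what we wanted
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```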
00:20:06.657 [2024-11-06 14:01:52.919034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:20:07.600 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gBlnJbMHZu 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.601 [2024-11-06 14:01:53.779836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.601 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.860 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.861 [2024-11-06 14:01:54.116667] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.861 [2024-11-06 14:01:54.116866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.861 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.121 malloc0 00:20:08.121 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.381 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:08.381 [2024-11-06 14:01:54.611799] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gBlnJbMHZu': 0100666 00:20:08.381 [2024-11-06 14:01:54.611819] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:08.381 request: 00:20:08.381 { 00:20:08.381 "name": "key0", 00:20:08.381 "path": "/tmp/tmp.gBlnJbMHZu", 00:20:08.381 "method": "keyring_file_add_key", 00:20:08.381 "req_id": 1 
00:20:08.381 } 00:20:08.381 Got JSON-RPC error response 00:20:08.381 response: 00:20:08.381 { 00:20:08.381 "code": -1, 00:20:08.381 "message": "Operation not permitted" 00:20:08.381 } 00:20:08.381 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.641 [2024-11-06 14:01:54.768213] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:08.641 [2024-11-06 14:01:54.768240] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:08.641 request: 00:20:08.641 { 00:20:08.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.641 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.641 "psk": "key0", 00:20:08.641 "method": "nvmf_subsystem_add_host", 00:20:08.641 "req_id": 1 00:20:08.641 } 00:20:08.641 Got JSON-RPC error response 00:20:08.641 response: 00:20:08.641 { 00:20:08.641 "code": -32603, 00:20:08.641 "message": "Internal error" 00:20:08.641 } 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2429525 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2429525 ']' 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2429525 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:08.641 14:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2429525 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2429525' 00:20:08.641 killing process with pid 2429525 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2429525 00:20:08.641 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2429525 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gBlnJbMHZu 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2430013 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2430013 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2430013 ']' 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.902 14:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 [2024-11-06 14:01:55.023982] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:08.902 [2024-11-06 14:01:55.024043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.902 [2024-11-06 14:01:55.114248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.902 [2024-11-06 14:01:55.144073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.902 [2024-11-06 14:01:55.144096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.902 [2024-11-06 14:01:55.144102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.902 [2024-11-06 14:01:55.144107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.902 [2024-11-06 14:01:55.144111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
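(Editor's note: the `keyring_file_add_key` failure above — "Invalid permissions for key file '/tmp/tmp.gBlnJbMHZu': 0100666" — is expected at that point: SPDK's keyring rejects PSK files readable by group or others, and the trace then applies `chmod 0600` before retrying. The following is an approximation of that permission check in plain shell, not SPDK's actual C code; the temp file name is a placeholder, and `stat -c %a` assumes GNU coreutils as on this Linux runner:)

```shell
#!/bin/sh
# Approximate the owner-only permission check a PSK key file must pass.
key=$(mktemp)
chmod 0666 "$key"                 # world-readable, like the rejected 0100666 file above
mode=$(stat -c %a "$key")
if [ "$mode" != "600" ]; then
    echo "rejecting key file with mode $mode"
fi
chmod 0600 "$key"                 # same fix the test applies before re-adding key0
echo "mode after fix: $(stat -c %a "$key")"
rm -f "$key"
```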
00:20:08.902 [2024-11-06 14:01:55.144572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gBlnJbMHZu 00:20:09.842 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.842 [2024-11-06 14:01:56.001629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.842 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.102 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.102 [2024-11-06 14:01:56.326417] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.102 [2024-11-06 14:01:56.326615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:10.102 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.362 malloc0 00:20:10.362 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.622 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:10.622 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2430379 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2430379 /var/tmp/bdevperf.sock 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2430379 ']' 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:10.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.882 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.882 [2024-11-06 14:01:57.050279] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:10.882 [2024-11-06 14:01:57.050332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430379 ] 00:20:10.882 [2024-11-06 14:01:57.133824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.142 [2024-11-06 14:01:57.163046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.714 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.714 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:11.714 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:11.714 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.975 [2024-11-06 14:01:58.130735] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.975 TLSTESTn1 00:20:11.975 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:12.236 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:12.236 "subsystems": [ 00:20:12.236 { 00:20:12.236 "subsystem": "keyring", 00:20:12.236 "config": [ 00:20:12.236 { 00:20:12.236 "method": "keyring_file_add_key", 00:20:12.236 "params": { 00:20:12.236 "name": "key0", 00:20:12.236 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:12.236 } 00:20:12.236 } 00:20:12.236 ] 00:20:12.236 }, 00:20:12.236 { 00:20:12.236 "subsystem": "iobuf", 00:20:12.236 "config": [ 00:20:12.236 { 00:20:12.236 "method": "iobuf_set_options", 00:20:12.236 "params": { 00:20:12.236 "small_pool_count": 8192, 00:20:12.236 "large_pool_count": 1024, 00:20:12.236 "small_bufsize": 8192, 00:20:12.236 "large_bufsize": 135168, 00:20:12.236 "enable_numa": false 00:20:12.236 } 00:20:12.236 } 00:20:12.236 ] 00:20:12.236 }, 00:20:12.236 { 00:20:12.236 "subsystem": "sock", 00:20:12.236 "config": [ 00:20:12.236 { 00:20:12.236 "method": "sock_set_default_impl", 00:20:12.236 "params": { 00:20:12.236 "impl_name": "posix" 00:20:12.236 } 00:20:12.236 }, 00:20:12.236 { 00:20:12.236 "method": "sock_impl_set_options", 00:20:12.236 "params": { 00:20:12.236 "impl_name": "ssl", 00:20:12.236 "recv_buf_size": 4096, 00:20:12.236 "send_buf_size": 4096, 00:20:12.236 "enable_recv_pipe": true, 00:20:12.236 "enable_quickack": false, 00:20:12.236 "enable_placement_id": 0, 00:20:12.236 "enable_zerocopy_send_server": true, 00:20:12.236 "enable_zerocopy_send_client": false, 00:20:12.236 "zerocopy_threshold": 0, 00:20:12.236 "tls_version": 0, 00:20:12.236 "enable_ktls": false 00:20:12.236 } 00:20:12.236 }, 00:20:12.236 { 00:20:12.236 "method": "sock_impl_set_options", 00:20:12.236 "params": { 00:20:12.236 "impl_name": "posix", 00:20:12.236 "recv_buf_size": 2097152, 00:20:12.236 "send_buf_size": 2097152, 00:20:12.236 "enable_recv_pipe": true, 00:20:12.236 "enable_quickack": false, 00:20:12.236 "enable_placement_id": 0, 
00:20:12.236 "enable_zerocopy_send_server": true, 00:20:12.237 "enable_zerocopy_send_client": false, 00:20:12.237 "zerocopy_threshold": 0, 00:20:12.237 "tls_version": 0, 00:20:12.237 "enable_ktls": false 00:20:12.237 } 00:20:12.237 } 00:20:12.237 ] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "vmd", 00:20:12.237 "config": [] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "accel", 00:20:12.237 "config": [ 00:20:12.237 { 00:20:12.237 "method": "accel_set_options", 00:20:12.237 "params": { 00:20:12.237 "small_cache_size": 128, 00:20:12.237 "large_cache_size": 16, 00:20:12.237 "task_count": 2048, 00:20:12.237 "sequence_count": 2048, 00:20:12.237 "buf_count": 2048 00:20:12.237 } 00:20:12.237 } 00:20:12.237 ] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "bdev", 00:20:12.237 "config": [ 00:20:12.237 { 00:20:12.237 "method": "bdev_set_options", 00:20:12.237 "params": { 00:20:12.237 "bdev_io_pool_size": 65535, 00:20:12.237 "bdev_io_cache_size": 256, 00:20:12.237 "bdev_auto_examine": true, 00:20:12.237 "iobuf_small_cache_size": 128, 00:20:12.237 "iobuf_large_cache_size": 16 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_raid_set_options", 00:20:12.237 "params": { 00:20:12.237 "process_window_size_kb": 1024, 00:20:12.237 "process_max_bandwidth_mb_sec": 0 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_iscsi_set_options", 00:20:12.237 "params": { 00:20:12.237 "timeout_sec": 30 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_nvme_set_options", 00:20:12.237 "params": { 00:20:12.237 "action_on_timeout": "none", 00:20:12.237 "timeout_us": 0, 00:20:12.237 "timeout_admin_us": 0, 00:20:12.237 "keep_alive_timeout_ms": 10000, 00:20:12.237 "arbitration_burst": 0, 00:20:12.237 "low_priority_weight": 0, 00:20:12.237 "medium_priority_weight": 0, 00:20:12.237 "high_priority_weight": 0, 00:20:12.237 "nvme_adminq_poll_period_us": 10000, 00:20:12.237 "nvme_ioq_poll_period_us": 0, 
00:20:12.237 "io_queue_requests": 0, 00:20:12.237 "delay_cmd_submit": true, 00:20:12.237 "transport_retry_count": 4, 00:20:12.237 "bdev_retry_count": 3, 00:20:12.237 "transport_ack_timeout": 0, 00:20:12.237 "ctrlr_loss_timeout_sec": 0, 00:20:12.237 "reconnect_delay_sec": 0, 00:20:12.237 "fast_io_fail_timeout_sec": 0, 00:20:12.237 "disable_auto_failback": false, 00:20:12.237 "generate_uuids": false, 00:20:12.237 "transport_tos": 0, 00:20:12.237 "nvme_error_stat": false, 00:20:12.237 "rdma_srq_size": 0, 00:20:12.237 "io_path_stat": false, 00:20:12.237 "allow_accel_sequence": false, 00:20:12.237 "rdma_max_cq_size": 0, 00:20:12.237 "rdma_cm_event_timeout_ms": 0, 00:20:12.237 "dhchap_digests": [ 00:20:12.237 "sha256", 00:20:12.237 "sha384", 00:20:12.237 "sha512" 00:20:12.237 ], 00:20:12.237 "dhchap_dhgroups": [ 00:20:12.237 "null", 00:20:12.237 "ffdhe2048", 00:20:12.237 "ffdhe3072", 00:20:12.237 "ffdhe4096", 00:20:12.237 "ffdhe6144", 00:20:12.237 "ffdhe8192" 00:20:12.237 ] 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_nvme_set_hotplug", 00:20:12.237 "params": { 00:20:12.237 "period_us": 100000, 00:20:12.237 "enable": false 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_malloc_create", 00:20:12.237 "params": { 00:20:12.237 "name": "malloc0", 00:20:12.237 "num_blocks": 8192, 00:20:12.237 "block_size": 4096, 00:20:12.237 "physical_block_size": 4096, 00:20:12.237 "uuid": "e07a07a9-1423-46db-aeda-2229c9c8aaa6", 00:20:12.237 "optimal_io_boundary": 0, 00:20:12.237 "md_size": 0, 00:20:12.237 "dif_type": 0, 00:20:12.237 "dif_is_head_of_md": false, 00:20:12.237 "dif_pi_format": 0 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "bdev_wait_for_examine" 00:20:12.237 } 00:20:12.237 ] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "nbd", 00:20:12.237 "config": [] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "scheduler", 00:20:12.237 "config": [ 00:20:12.237 { 00:20:12.237 "method": 
"framework_set_scheduler", 00:20:12.237 "params": { 00:20:12.237 "name": "static" 00:20:12.237 } 00:20:12.237 } 00:20:12.237 ] 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "subsystem": "nvmf", 00:20:12.237 "config": [ 00:20:12.237 { 00:20:12.237 "method": "nvmf_set_config", 00:20:12.237 "params": { 00:20:12.237 "discovery_filter": "match_any", 00:20:12.237 "admin_cmd_passthru": { 00:20:12.237 "identify_ctrlr": false 00:20:12.237 }, 00:20:12.237 "dhchap_digests": [ 00:20:12.237 "sha256", 00:20:12.237 "sha384", 00:20:12.237 "sha512" 00:20:12.237 ], 00:20:12.237 "dhchap_dhgroups": [ 00:20:12.237 "null", 00:20:12.237 "ffdhe2048", 00:20:12.237 "ffdhe3072", 00:20:12.237 "ffdhe4096", 00:20:12.237 "ffdhe6144", 00:20:12.237 "ffdhe8192" 00:20:12.237 ] 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "nvmf_set_max_subsystems", 00:20:12.237 "params": { 00:20:12.237 "max_subsystems": 1024 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "nvmf_set_crdt", 00:20:12.237 "params": { 00:20:12.237 "crdt1": 0, 00:20:12.237 "crdt2": 0, 00:20:12.237 "crdt3": 0 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "nvmf_create_transport", 00:20:12.237 "params": { 00:20:12.237 "trtype": "TCP", 00:20:12.237 "max_queue_depth": 128, 00:20:12.237 "max_io_qpairs_per_ctrlr": 127, 00:20:12.237 "in_capsule_data_size": 4096, 00:20:12.237 "max_io_size": 131072, 00:20:12.237 "io_unit_size": 131072, 00:20:12.237 "max_aq_depth": 128, 00:20:12.237 "num_shared_buffers": 511, 00:20:12.237 "buf_cache_size": 4294967295, 00:20:12.237 "dif_insert_or_strip": false, 00:20:12.237 "zcopy": false, 00:20:12.237 "c2h_success": false, 00:20:12.237 "sock_priority": 0, 00:20:12.237 "abort_timeout_sec": 1, 00:20:12.237 "ack_timeout": 0, 00:20:12.237 "data_wr_pool_size": 0 00:20:12.237 } 00:20:12.237 }, 00:20:12.237 { 00:20:12.237 "method": "nvmf_create_subsystem", 00:20:12.237 "params": { 00:20:12.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.237 
"allow_any_host": false, 00:20:12.238 "serial_number": "SPDK00000000000001", 00:20:12.238 "model_number": "SPDK bdev Controller", 00:20:12.238 "max_namespaces": 10, 00:20:12.238 "min_cntlid": 1, 00:20:12.238 "max_cntlid": 65519, 00:20:12.238 "ana_reporting": false 00:20:12.238 } 00:20:12.238 }, 00:20:12.238 { 00:20:12.238 "method": "nvmf_subsystem_add_host", 00:20:12.238 "params": { 00:20:12.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.238 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.238 "psk": "key0" 00:20:12.238 } 00:20:12.238 }, 00:20:12.238 { 00:20:12.238 "method": "nvmf_subsystem_add_ns", 00:20:12.238 "params": { 00:20:12.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.238 "namespace": { 00:20:12.238 "nsid": 1, 00:20:12.238 "bdev_name": "malloc0", 00:20:12.238 "nguid": "E07A07A9142346DBAEDA2229C9C8AAA6", 00:20:12.238 "uuid": "e07a07a9-1423-46db-aeda-2229c9c8aaa6", 00:20:12.238 "no_auto_visible": false 00:20:12.238 } 00:20:12.238 } 00:20:12.238 }, 00:20:12.238 { 00:20:12.238 "method": "nvmf_subsystem_add_listener", 00:20:12.238 "params": { 00:20:12.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.238 "listen_address": { 00:20:12.238 "trtype": "TCP", 00:20:12.238 "adrfam": "IPv4", 00:20:12.238 "traddr": "10.0.0.2", 00:20:12.238 "trsvcid": "4420" 00:20:12.238 }, 00:20:12.238 "secure_channel": true 00:20:12.238 } 00:20:12.238 } 00:20:12.238 ] 00:20:12.238 } 00:20:12.238 ] 00:20:12.238 }' 00:20:12.238 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:12.499 "subsystems": [ 00:20:12.499 { 00:20:12.499 "subsystem": "keyring", 00:20:12.499 "config": [ 00:20:12.499 { 00:20:12.499 "method": "keyring_file_add_key", 00:20:12.499 "params": { 00:20:12.499 "name": "key0", 00:20:12.499 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:12.499 } 
00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "iobuf", 00:20:12.499 "config": [ 00:20:12.499 { 00:20:12.499 "method": "iobuf_set_options", 00:20:12.499 "params": { 00:20:12.499 "small_pool_count": 8192, 00:20:12.499 "large_pool_count": 1024, 00:20:12.499 "small_bufsize": 8192, 00:20:12.499 "large_bufsize": 135168, 00:20:12.499 "enable_numa": false 00:20:12.499 } 00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "sock", 00:20:12.499 "config": [ 00:20:12.499 { 00:20:12.499 "method": "sock_set_default_impl", 00:20:12.499 "params": { 00:20:12.499 "impl_name": "posix" 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "sock_impl_set_options", 00:20:12.499 "params": { 00:20:12.499 "impl_name": "ssl", 00:20:12.499 "recv_buf_size": 4096, 00:20:12.499 "send_buf_size": 4096, 00:20:12.499 "enable_recv_pipe": true, 00:20:12.499 "enable_quickack": false, 00:20:12.499 "enable_placement_id": 0, 00:20:12.499 "enable_zerocopy_send_server": true, 00:20:12.499 "enable_zerocopy_send_client": false, 00:20:12.499 "zerocopy_threshold": 0, 00:20:12.499 "tls_version": 0, 00:20:12.499 "enable_ktls": false 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "sock_impl_set_options", 00:20:12.499 "params": { 00:20:12.499 "impl_name": "posix", 00:20:12.499 "recv_buf_size": 2097152, 00:20:12.499 "send_buf_size": 2097152, 00:20:12.499 "enable_recv_pipe": true, 00:20:12.499 "enable_quickack": false, 00:20:12.499 "enable_placement_id": 0, 00:20:12.499 "enable_zerocopy_send_server": true, 00:20:12.499 "enable_zerocopy_send_client": false, 00:20:12.499 "zerocopy_threshold": 0, 00:20:12.499 "tls_version": 0, 00:20:12.499 "enable_ktls": false 00:20:12.499 } 00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "vmd", 00:20:12.499 "config": [] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "accel", 00:20:12.499 "config": [ 00:20:12.499 { 00:20:12.499 
"method": "accel_set_options", 00:20:12.499 "params": { 00:20:12.499 "small_cache_size": 128, 00:20:12.499 "large_cache_size": 16, 00:20:12.499 "task_count": 2048, 00:20:12.499 "sequence_count": 2048, 00:20:12.499 "buf_count": 2048 00:20:12.499 } 00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "bdev", 00:20:12.499 "config": [ 00:20:12.499 { 00:20:12.499 "method": "bdev_set_options", 00:20:12.499 "params": { 00:20:12.499 "bdev_io_pool_size": 65535, 00:20:12.499 "bdev_io_cache_size": 256, 00:20:12.499 "bdev_auto_examine": true, 00:20:12.499 "iobuf_small_cache_size": 128, 00:20:12.499 "iobuf_large_cache_size": 16 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_raid_set_options", 00:20:12.499 "params": { 00:20:12.499 "process_window_size_kb": 1024, 00:20:12.499 "process_max_bandwidth_mb_sec": 0 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_iscsi_set_options", 00:20:12.499 "params": { 00:20:12.499 "timeout_sec": 30 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_nvme_set_options", 00:20:12.499 "params": { 00:20:12.499 "action_on_timeout": "none", 00:20:12.499 "timeout_us": 0, 00:20:12.499 "timeout_admin_us": 0, 00:20:12.499 "keep_alive_timeout_ms": 10000, 00:20:12.499 "arbitration_burst": 0, 00:20:12.499 "low_priority_weight": 0, 00:20:12.499 "medium_priority_weight": 0, 00:20:12.499 "high_priority_weight": 0, 00:20:12.499 "nvme_adminq_poll_period_us": 10000, 00:20:12.499 "nvme_ioq_poll_period_us": 0, 00:20:12.499 "io_queue_requests": 512, 00:20:12.499 "delay_cmd_submit": true, 00:20:12.499 "transport_retry_count": 4, 00:20:12.499 "bdev_retry_count": 3, 00:20:12.499 "transport_ack_timeout": 0, 00:20:12.499 "ctrlr_loss_timeout_sec": 0, 00:20:12.499 "reconnect_delay_sec": 0, 00:20:12.499 "fast_io_fail_timeout_sec": 0, 00:20:12.499 "disable_auto_failback": false, 00:20:12.499 "generate_uuids": false, 00:20:12.499 "transport_tos": 0, 00:20:12.499 
"nvme_error_stat": false, 00:20:12.499 "rdma_srq_size": 0, 00:20:12.499 "io_path_stat": false, 00:20:12.499 "allow_accel_sequence": false, 00:20:12.499 "rdma_max_cq_size": 0, 00:20:12.499 "rdma_cm_event_timeout_ms": 0, 00:20:12.499 "dhchap_digests": [ 00:20:12.499 "sha256", 00:20:12.499 "sha384", 00:20:12.499 "sha512" 00:20:12.499 ], 00:20:12.499 "dhchap_dhgroups": [ 00:20:12.499 "null", 00:20:12.499 "ffdhe2048", 00:20:12.499 "ffdhe3072", 00:20:12.499 "ffdhe4096", 00:20:12.499 "ffdhe6144", 00:20:12.499 "ffdhe8192" 00:20:12.499 ] 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_nvme_attach_controller", 00:20:12.499 "params": { 00:20:12.499 "name": "TLSTEST", 00:20:12.499 "trtype": "TCP", 00:20:12.499 "adrfam": "IPv4", 00:20:12.499 "traddr": "10.0.0.2", 00:20:12.499 "trsvcid": "4420", 00:20:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.499 "prchk_reftag": false, 00:20:12.499 "prchk_guard": false, 00:20:12.499 "ctrlr_loss_timeout_sec": 0, 00:20:12.499 "reconnect_delay_sec": 0, 00:20:12.499 "fast_io_fail_timeout_sec": 0, 00:20:12.499 "psk": "key0", 00:20:12.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.499 "hdgst": false, 00:20:12.499 "ddgst": false, 00:20:12.499 "multipath": "multipath" 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_nvme_set_hotplug", 00:20:12.499 "params": { 00:20:12.499 "period_us": 100000, 00:20:12.499 "enable": false 00:20:12.499 } 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "method": "bdev_wait_for_examine" 00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }, 00:20:12.499 { 00:20:12.499 "subsystem": "nbd", 00:20:12.499 "config": [] 00:20:12.499 } 00:20:12.499 ] 00:20:12.499 }' 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2430379 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2430379 ']' 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 2430379 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:12.499 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2430379 00:20:12.760 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:12.760 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2430379' 00:20:12.761 killing process with pid 2430379 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2430379 00:20:12.761 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.761 00:20:12.761 Latency(us) 00:20:12.761 [2024-11-06T13:01:59.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.761 [2024-11-06T13:01:59.041Z] =================================================================================================================== 00:20:12.761 [2024-11-06T13:01:59.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2430379 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2430013 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2430013 ']' 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2430013 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2430013 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2430013' 00:20:12.761 killing process with pid 2430013 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2430013 00:20:12.761 14:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2430013 00:20:13.023 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:13.023 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.023 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:13.023 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.023 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:13.023 "subsystems": [ 00:20:13.023 { 00:20:13.023 "subsystem": "keyring", 00:20:13.023 "config": [ 00:20:13.023 { 00:20:13.023 "method": "keyring_file_add_key", 00:20:13.023 "params": { 00:20:13.023 "name": "key0", 00:20:13.023 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:13.023 } 00:20:13.023 } 00:20:13.023 ] 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "subsystem": "iobuf", 00:20:13.023 "config": [ 00:20:13.023 { 00:20:13.023 "method": "iobuf_set_options", 00:20:13.023 "params": { 00:20:13.023 "small_pool_count": 8192, 00:20:13.023 "large_pool_count": 1024, 00:20:13.023 "small_bufsize": 8192, 00:20:13.023 "large_bufsize": 135168, 
00:20:13.023 "enable_numa": false 00:20:13.023 } 00:20:13.023 } 00:20:13.023 ] 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "subsystem": "sock", 00:20:13.023 "config": [ 00:20:13.023 { 00:20:13.023 "method": "sock_set_default_impl", 00:20:13.023 "params": { 00:20:13.023 "impl_name": "posix" 00:20:13.023 } 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "method": "sock_impl_set_options", 00:20:13.023 "params": { 00:20:13.023 "impl_name": "ssl", 00:20:13.023 "recv_buf_size": 4096, 00:20:13.023 "send_buf_size": 4096, 00:20:13.023 "enable_recv_pipe": true, 00:20:13.023 "enable_quickack": false, 00:20:13.023 "enable_placement_id": 0, 00:20:13.023 "enable_zerocopy_send_server": true, 00:20:13.023 "enable_zerocopy_send_client": false, 00:20:13.023 "zerocopy_threshold": 0, 00:20:13.023 "tls_version": 0, 00:20:13.023 "enable_ktls": false 00:20:13.023 } 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "method": "sock_impl_set_options", 00:20:13.023 "params": { 00:20:13.023 "impl_name": "posix", 00:20:13.023 "recv_buf_size": 2097152, 00:20:13.023 "send_buf_size": 2097152, 00:20:13.023 "enable_recv_pipe": true, 00:20:13.023 "enable_quickack": false, 00:20:13.023 "enable_placement_id": 0, 00:20:13.023 "enable_zerocopy_send_server": true, 00:20:13.023 "enable_zerocopy_send_client": false, 00:20:13.023 "zerocopy_threshold": 0, 00:20:13.023 "tls_version": 0, 00:20:13.023 "enable_ktls": false 00:20:13.023 } 00:20:13.023 } 00:20:13.023 ] 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "subsystem": "vmd", 00:20:13.023 "config": [] 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "subsystem": "accel", 00:20:13.023 "config": [ 00:20:13.023 { 00:20:13.023 "method": "accel_set_options", 00:20:13.023 "params": { 00:20:13.023 "small_cache_size": 128, 00:20:13.023 "large_cache_size": 16, 00:20:13.023 "task_count": 2048, 00:20:13.023 "sequence_count": 2048, 00:20:13.023 "buf_count": 2048 00:20:13.023 } 00:20:13.023 } 00:20:13.023 ] 00:20:13.023 }, 00:20:13.023 { 00:20:13.023 "subsystem": "bdev", 00:20:13.023 
"config": [ 00:20:13.023 { 00:20:13.024 "method": "bdev_set_options", 00:20:13.024 "params": { 00:20:13.024 "bdev_io_pool_size": 65535, 00:20:13.024 "bdev_io_cache_size": 256, 00:20:13.024 "bdev_auto_examine": true, 00:20:13.024 "iobuf_small_cache_size": 128, 00:20:13.024 "iobuf_large_cache_size": 16 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_raid_set_options", 00:20:13.024 "params": { 00:20:13.024 "process_window_size_kb": 1024, 00:20:13.024 "process_max_bandwidth_mb_sec": 0 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_iscsi_set_options", 00:20:13.024 "params": { 00:20:13.024 "timeout_sec": 30 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_nvme_set_options", 00:20:13.024 "params": { 00:20:13.024 "action_on_timeout": "none", 00:20:13.024 "timeout_us": 0, 00:20:13.024 "timeout_admin_us": 0, 00:20:13.024 "keep_alive_timeout_ms": 10000, 00:20:13.024 "arbitration_burst": 0, 00:20:13.024 "low_priority_weight": 0, 00:20:13.024 "medium_priority_weight": 0, 00:20:13.024 "high_priority_weight": 0, 00:20:13.024 "nvme_adminq_poll_period_us": 10000, 00:20:13.024 "nvme_ioq_poll_period_us": 0, 00:20:13.024 "io_queue_requests": 0, 00:20:13.024 "delay_cmd_submit": true, 00:20:13.024 "transport_retry_count": 4, 00:20:13.024 "bdev_retry_count": 3, 00:20:13.024 "transport_ack_timeout": 0, 00:20:13.024 "ctrlr_loss_timeout_sec": 0, 00:20:13.024 "reconnect_delay_sec": 0, 00:20:13.024 "fast_io_fail_timeout_sec": 0, 00:20:13.024 "disable_auto_failback": false, 00:20:13.024 "generate_uuids": false, 00:20:13.024 "transport_tos": 0, 00:20:13.024 "nvme_error_stat": false, 00:20:13.024 "rdma_srq_size": 0, 00:20:13.024 "io_path_stat": false, 00:20:13.024 "allow_accel_sequence": false, 00:20:13.024 "rdma_max_cq_size": 0, 00:20:13.024 "rdma_cm_event_timeout_ms": 0, 00:20:13.024 "dhchap_digests": [ 00:20:13.024 "sha256", 00:20:13.024 "sha384", 00:20:13.024 "sha512" 00:20:13.024 ], 00:20:13.024 
"dhchap_dhgroups": [ 00:20:13.024 "null", 00:20:13.024 "ffdhe2048", 00:20:13.024 "ffdhe3072", 00:20:13.024 "ffdhe4096", 00:20:13.024 "ffdhe6144", 00:20:13.024 "ffdhe8192" 00:20:13.024 ] 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_nvme_set_hotplug", 00:20:13.024 "params": { 00:20:13.024 "period_us": 100000, 00:20:13.024 "enable": false 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_malloc_create", 00:20:13.024 "params": { 00:20:13.024 "name": "malloc0", 00:20:13.024 "num_blocks": 8192, 00:20:13.024 "block_size": 4096, 00:20:13.024 "physical_block_size": 4096, 00:20:13.024 "uuid": "e07a07a9-1423-46db-aeda-2229c9c8aaa6", 00:20:13.024 "optimal_io_boundary": 0, 00:20:13.024 "md_size": 0, 00:20:13.024 "dif_type": 0, 00:20:13.024 "dif_is_head_of_md": false, 00:20:13.024 "dif_pi_format": 0 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "bdev_wait_for_examine" 00:20:13.024 } 00:20:13.024 ] 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "subsystem": "nbd", 00:20:13.024 "config": [] 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "subsystem": "scheduler", 00:20:13.024 "config": [ 00:20:13.024 { 00:20:13.024 "method": "framework_set_scheduler", 00:20:13.024 "params": { 00:20:13.024 "name": "static" 00:20:13.024 } 00:20:13.024 } 00:20:13.024 ] 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "subsystem": "nvmf", 00:20:13.024 "config": [ 00:20:13.024 { 00:20:13.024 "method": "nvmf_set_config", 00:20:13.024 "params": { 00:20:13.024 "discovery_filter": "match_any", 00:20:13.024 "admin_cmd_passthru": { 00:20:13.024 "identify_ctrlr": false 00:20:13.024 }, 00:20:13.024 "dhchap_digests": [ 00:20:13.024 "sha256", 00:20:13.024 "sha384", 00:20:13.024 "sha512" 00:20:13.024 ], 00:20:13.024 "dhchap_dhgroups": [ 00:20:13.024 "null", 00:20:13.024 "ffdhe2048", 00:20:13.024 "ffdhe3072", 00:20:13.024 "ffdhe4096", 00:20:13.024 "ffdhe6144", 00:20:13.024 "ffdhe8192" 00:20:13.024 ] 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 
00:20:13.024 "method": "nvmf_set_max_subsystems", 00:20:13.024 "params": { 00:20:13.024 "max_subsystems": 1024 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_set_crdt", 00:20:13.024 "params": { 00:20:13.024 "crdt1": 0, 00:20:13.024 "crdt2": 0, 00:20:13.024 "crdt3": 0 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_create_transport", 00:20:13.024 "params": { 00:20:13.024 "trtype": "TCP", 00:20:13.024 "max_queue_depth": 128, 00:20:13.024 "max_io_qpairs_per_ctrlr": 127, 00:20:13.024 "in_capsule_data_size": 4096, 00:20:13.024 "max_io_size": 131072, 00:20:13.024 "io_unit_size": 131072, 00:20:13.024 "max_aq_depth": 128, 00:20:13.024 "num_shared_buffers": 511, 00:20:13.024 "buf_cache_size": 4294967295, 00:20:13.024 "dif_insert_or_strip": false, 00:20:13.024 "zcopy": false, 00:20:13.024 "c2h_success": false, 00:20:13.024 "sock_priority": 0, 00:20:13.024 "abort_timeout_sec": 1, 00:20:13.024 "ack_timeout": 0, 00:20:13.024 "data_wr_pool_size": 0 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_create_subsystem", 00:20:13.024 "params": { 00:20:13.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.024 "allow_any_host": false, 00:20:13.024 "serial_number": "SPDK00000000000001", 00:20:13.024 "model_number": "SPDK bdev Controller", 00:20:13.024 "max_namespaces": 10, 00:20:13.024 "min_cntlid": 1, 00:20:13.024 "max_cntlid": 65519, 00:20:13.024 "ana_reporting": false 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_subsystem_add_host", 00:20:13.024 "params": { 00:20:13.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.024 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.024 "psk": "key0" 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_subsystem_add_ns", 00:20:13.024 "params": { 00:20:13.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.024 "namespace": { 00:20:13.024 "nsid": 1, 00:20:13.024 "bdev_name": "malloc0", 00:20:13.024 "nguid": 
"E07A07A9142346DBAEDA2229C9C8AAA6", 00:20:13.024 "uuid": "e07a07a9-1423-46db-aeda-2229c9c8aaa6", 00:20:13.024 "no_auto_visible": false 00:20:13.024 } 00:20:13.024 } 00:20:13.024 }, 00:20:13.024 { 00:20:13.024 "method": "nvmf_subsystem_add_listener", 00:20:13.024 "params": { 00:20:13.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.024 "listen_address": { 00:20:13.024 "trtype": "TCP", 00:20:13.024 "adrfam": "IPv4", 00:20:13.024 "traddr": "10.0.0.2", 00:20:13.024 "trsvcid": "4420" 00:20:13.024 }, 00:20:13.024 "secure_channel": true 00:20:13.024 } 00:20:13.024 } 00:20:13.024 ] 00:20:13.024 } 00:20:13.024 ] 00:20:13.024 }' 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2430739 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2430739 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2430739 ']' 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.024 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.025 [2024-11-06 14:01:59.143729] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:13.025 [2024-11-06 14:01:59.143817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.025 [2024-11-06 14:01:59.237748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.025 [2024-11-06 14:01:59.267637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.025 [2024-11-06 14:01:59.267663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.025 [2024-11-06 14:01:59.267668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.025 [2024-11-06 14:01:59.267673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.025 [2024-11-06 14:01:59.267677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.025 [2024-11-06 14:01:59.268160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.286 [2024-11-06 14:01:59.461966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.286 [2024-11-06 14:01:59.493995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.286 [2024-11-06 14:01:59.494185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2431084 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2431084 /var/tmp/bdevperf.sock 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2431084 ']' 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:13.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.856 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:13.856 "subsystems": [ 00:20:13.856 { 00:20:13.856 "subsystem": "keyring", 00:20:13.856 "config": [ 00:20:13.856 { 00:20:13.856 "method": "keyring_file_add_key", 00:20:13.856 "params": { 00:20:13.856 "name": "key0", 00:20:13.856 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:13.856 } 00:20:13.856 } 00:20:13.856 ] 00:20:13.856 }, 00:20:13.856 { 00:20:13.856 "subsystem": "iobuf", 00:20:13.856 "config": [ 00:20:13.856 { 00:20:13.856 "method": "iobuf_set_options", 00:20:13.856 "params": { 00:20:13.856 "small_pool_count": 8192, 00:20:13.857 "large_pool_count": 1024, 00:20:13.857 "small_bufsize": 8192, 00:20:13.857 "large_bufsize": 135168, 00:20:13.857 "enable_numa": false 00:20:13.857 } 00:20:13.857 } 00:20:13.857 ] 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "subsystem": "sock", 00:20:13.857 "config": [ 00:20:13.857 { 00:20:13.857 "method": "sock_set_default_impl", 00:20:13.857 "params": { 00:20:13.857 "impl_name": "posix" 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "sock_impl_set_options", 00:20:13.857 "params": { 00:20:13.857 "impl_name": "ssl", 00:20:13.857 "recv_buf_size": 4096, 00:20:13.857 "send_buf_size": 4096, 00:20:13.857 "enable_recv_pipe": true, 00:20:13.857 "enable_quickack": false, 00:20:13.857 "enable_placement_id": 0, 00:20:13.857 "enable_zerocopy_send_server": true, 00:20:13.857 
"enable_zerocopy_send_client": false, 00:20:13.857 "zerocopy_threshold": 0, 00:20:13.857 "tls_version": 0, 00:20:13.857 "enable_ktls": false 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "sock_impl_set_options", 00:20:13.857 "params": { 00:20:13.857 "impl_name": "posix", 00:20:13.857 "recv_buf_size": 2097152, 00:20:13.857 "send_buf_size": 2097152, 00:20:13.857 "enable_recv_pipe": true, 00:20:13.857 "enable_quickack": false, 00:20:13.857 "enable_placement_id": 0, 00:20:13.857 "enable_zerocopy_send_server": true, 00:20:13.857 "enable_zerocopy_send_client": false, 00:20:13.857 "zerocopy_threshold": 0, 00:20:13.857 "tls_version": 0, 00:20:13.857 "enable_ktls": false 00:20:13.857 } 00:20:13.857 } 00:20:13.857 ] 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "subsystem": "vmd", 00:20:13.857 "config": [] 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "subsystem": "accel", 00:20:13.857 "config": [ 00:20:13.857 { 00:20:13.857 "method": "accel_set_options", 00:20:13.857 "params": { 00:20:13.857 "small_cache_size": 128, 00:20:13.857 "large_cache_size": 16, 00:20:13.857 "task_count": 2048, 00:20:13.857 "sequence_count": 2048, 00:20:13.857 "buf_count": 2048 00:20:13.857 } 00:20:13.857 } 00:20:13.857 ] 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "subsystem": "bdev", 00:20:13.857 "config": [ 00:20:13.857 { 00:20:13.857 "method": "bdev_set_options", 00:20:13.857 "params": { 00:20:13.857 "bdev_io_pool_size": 65535, 00:20:13.857 "bdev_io_cache_size": 256, 00:20:13.857 "bdev_auto_examine": true, 00:20:13.857 "iobuf_small_cache_size": 128, 00:20:13.857 "iobuf_large_cache_size": 16 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "bdev_raid_set_options", 00:20:13.857 "params": { 00:20:13.857 "process_window_size_kb": 1024, 00:20:13.857 "process_max_bandwidth_mb_sec": 0 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "bdev_iscsi_set_options", 00:20:13.857 "params": { 00:20:13.857 "timeout_sec": 30 00:20:13.857 } 00:20:13.857 }, 
00:20:13.857 { 00:20:13.857 "method": "bdev_nvme_set_options", 00:20:13.857 "params": { 00:20:13.857 "action_on_timeout": "none", 00:20:13.857 "timeout_us": 0, 00:20:13.857 "timeout_admin_us": 0, 00:20:13.857 "keep_alive_timeout_ms": 10000, 00:20:13.857 "arbitration_burst": 0, 00:20:13.857 "low_priority_weight": 0, 00:20:13.857 "medium_priority_weight": 0, 00:20:13.857 "high_priority_weight": 0, 00:20:13.857 "nvme_adminq_poll_period_us": 10000, 00:20:13.857 "nvme_ioq_poll_period_us": 0, 00:20:13.857 "io_queue_requests": 512, 00:20:13.857 "delay_cmd_submit": true, 00:20:13.857 "transport_retry_count": 4, 00:20:13.857 "bdev_retry_count": 3, 00:20:13.857 "transport_ack_timeout": 0, 00:20:13.857 "ctrlr_loss_timeout_sec": 0, 00:20:13.857 "reconnect_delay_sec": 0, 00:20:13.857 "fast_io_fail_timeout_sec": 0, 00:20:13.857 "disable_auto_failback": false, 00:20:13.857 "generate_uuids": false, 00:20:13.857 "transport_tos": 0, 00:20:13.857 "nvme_error_stat": false, 00:20:13.857 "rdma_srq_size": 0, 00:20:13.857 "io_path_stat": false, 00:20:13.857 "allow_accel_sequence": false, 00:20:13.857 "rdma_max_cq_size": 0, 00:20:13.857 "rdma_cm_event_timeout_ms": 0, 00:20:13.857 "dhchap_digests": [ 00:20:13.857 "sha256", 00:20:13.857 "sha384", 00:20:13.857 "sha512" 00:20:13.857 ], 00:20:13.857 "dhchap_dhgroups": [ 00:20:13.857 "null", 00:20:13.857 "ffdhe2048", 00:20:13.857 "ffdhe3072", 00:20:13.857 "ffdhe4096", 00:20:13.857 "ffdhe6144", 00:20:13.857 "ffdhe8192" 00:20:13.857 ] 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "bdev_nvme_attach_controller", 00:20:13.857 "params": { 00:20:13.857 "name": "TLSTEST", 00:20:13.857 "trtype": "TCP", 00:20:13.857 "adrfam": "IPv4", 00:20:13.857 "traddr": "10.0.0.2", 00:20:13.857 "trsvcid": "4420", 00:20:13.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.857 "prchk_reftag": false, 00:20:13.857 "prchk_guard": false, 00:20:13.857 "ctrlr_loss_timeout_sec": 0, 00:20:13.857 "reconnect_delay_sec": 0, 00:20:13.857 
"fast_io_fail_timeout_sec": 0, 00:20:13.857 "psk": "key0", 00:20:13.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.857 "hdgst": false, 00:20:13.857 "ddgst": false, 00:20:13.857 "multipath": "multipath" 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "bdev_nvme_set_hotplug", 00:20:13.857 "params": { 00:20:13.857 "period_us": 100000, 00:20:13.857 "enable": false 00:20:13.857 } 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "method": "bdev_wait_for_examine" 00:20:13.857 } 00:20:13.857 ] 00:20:13.857 }, 00:20:13.857 { 00:20:13.857 "subsystem": "nbd", 00:20:13.857 "config": [] 00:20:13.857 } 00:20:13.857 ] 00:20:13.857 }' 00:20:13.857 [2024-11-06 14:02:00.023406] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:13.857 [2024-11-06 14:02:00.023520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431084 ] 00:20:13.857 [2024-11-06 14:02:00.116567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.118 [2024-11-06 14:02:00.146929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.118 [2024-11-06 14:02:00.282187] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.689 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.689 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:14.689 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:14.689 Running I/O for 10 seconds... 
00:20:17.014 5282.00 IOPS, 20.63 MiB/s [2024-11-06T13:02:04.242Z] 5469.50 IOPS, 21.37 MiB/s [2024-11-06T13:02:05.285Z] 5406.33 IOPS, 21.12 MiB/s [2024-11-06T13:02:06.226Z] 5311.75 IOPS, 20.75 MiB/s [2024-11-06T13:02:07.166Z] 5409.20 IOPS, 21.13 MiB/s [2024-11-06T13:02:08.107Z] 5381.17 IOPS, 21.02 MiB/s [2024-11-06T13:02:09.049Z] 5456.00 IOPS, 21.31 MiB/s [2024-11-06T13:02:09.990Z] 5485.00 IOPS, 21.43 MiB/s [2024-11-06T13:02:10.933Z] 5541.67 IOPS, 21.65 MiB/s [2024-11-06T13:02:11.193Z] 5633.60 IOPS, 22.01 MiB/s 00:20:24.913 Latency(us) 00:20:24.913 [2024-11-06T13:02:11.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.913 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:24.913 Verification LBA range: start 0x0 length 0x2000 00:20:24.913 TLSTESTn1 : 10.01 5639.19 22.03 0.00 0.00 22666.33 5870.93 86944.43 00:20:24.913 [2024-11-06T13:02:11.193Z] =================================================================================================================== 00:20:24.913 [2024-11-06T13:02:11.193Z] Total : 5639.19 22.03 0.00 0.00 22666.33 5870.93 86944.43 00:20:24.913 { 00:20:24.913 "results": [ 00:20:24.913 { 00:20:24.913 "job": "TLSTESTn1", 00:20:24.913 "core_mask": "0x4", 00:20:24.913 "workload": "verify", 00:20:24.913 "status": "finished", 00:20:24.913 "verify_range": { 00:20:24.913 "start": 0, 00:20:24.913 "length": 8192 00:20:24.913 }, 00:20:24.913 "queue_depth": 128, 00:20:24.913 "io_size": 4096, 00:20:24.913 "runtime": 10.012781, 00:20:24.913 "iops": 5639.192548004396, 00:20:24.913 "mibps": 22.02809589064217, 00:20:24.913 "io_failed": 0, 00:20:24.913 "io_timeout": 0, 00:20:24.913 "avg_latency_us": 22666.332853499578, 00:20:24.913 "min_latency_us": 5870.933333333333, 00:20:24.913 "max_latency_us": 86944.42666666667 00:20:24.913 } 00:20:24.913 ], 00:20:24.913 "core_count": 1 00:20:24.913 } 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2431084 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2431084 ']' 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2431084 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.913 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2431084 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2431084' 00:20:24.913 killing process with pid 2431084 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2431084 00:20:24.913 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.913 00:20:24.913 Latency(us) 00:20:24.913 [2024-11-06T13:02:11.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.913 [2024-11-06T13:02:11.193Z] =================================================================================================================== 00:20:24.913 [2024-11-06T13:02:11.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2431084 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2430739 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@952 -- # '[' -z 2430739 ']' 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2430739 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.913 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2430739 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2430739' 00:20:25.174 killing process with pid 2430739 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2430739 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2430739 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2433117 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2433117 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:25.174 
14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2433117 ']' 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.174 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.174 [2024-11-06 14:02:11.371553] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:25.174 [2024-11-06 14:02:11.371606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.436 [2024-11-06 14:02:11.466130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.436 [2024-11-06 14:02:11.504966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.436 [2024-11-06 14:02:11.505011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.436 [2024-11-06 14:02:11.505020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.436 [2024-11-06 14:02:11.505027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:25.436 [2024-11-06 14:02:11.505033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.436 [2024-11-06 14:02:11.505737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gBlnJbMHZu 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gBlnJbMHZu 00:20:26.009 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.271 [2024-11-06 14:02:12.388664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.271 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:26.532 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:26.532 [2024-11-06 14:02:12.765618] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:26.533 [2024-11-06 14:02:12.765946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.533 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:26.794 malloc0 00:20:26.794 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:27.055 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2433580 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2433580 /var/tmp/bdevperf.sock 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2433580 ']' 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:27.316 
14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:27.316 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.577 [2024-11-06 14:02:13.617021] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:27.577 [2024-11-06 14:02:13.617095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433580 ] 00:20:27.577 [2024-11-06 14:02:13.708876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.577 [2024-11-06 14:02:13.743569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.520 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.520 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:28.520 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:28.520 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:28.520 [2024-11-06 14:02:14.762836] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:28.780 nvme0n1 00:20:28.780 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.780 Running I/O for 1 seconds... 00:20:29.722 4013.00 IOPS, 15.68 MiB/s 00:20:29.722 Latency(us) 00:20:29.722 [2024-11-06T13:02:16.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.722 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:29.722 Verification LBA range: start 0x0 length 0x2000 00:20:29.722 nvme0n1 : 1.02 4078.25 15.93 0.00 0.00 31146.58 5242.88 82138.45 00:20:29.722 [2024-11-06T13:02:16.002Z] =================================================================================================================== 00:20:29.722 [2024-11-06T13:02:16.002Z] Total : 4078.25 15.93 0.00 0.00 31146.58 5242.88 82138.45 00:20:29.722 { 00:20:29.722 "results": [ 00:20:29.722 { 00:20:29.722 "job": "nvme0n1", 00:20:29.722 "core_mask": "0x2", 00:20:29.722 "workload": "verify", 00:20:29.722 "status": "finished", 00:20:29.722 "verify_range": { 00:20:29.722 "start": 0, 00:20:29.722 "length": 8192 00:20:29.722 }, 00:20:29.722 "queue_depth": 128, 00:20:29.722 "io_size": 4096, 00:20:29.722 "runtime": 1.015631, 00:20:29.722 "iops": 4078.2528300140502, 00:20:29.722 "mibps": 15.930675117242384, 00:20:29.722 "io_failed": 0, 00:20:29.722 "io_timeout": 0, 00:20:29.722 "avg_latency_us": 31146.5793143409, 00:20:29.722 "min_latency_us": 5242.88, 00:20:29.722 "max_latency_us": 82138.45333333334 00:20:29.722 } 00:20:29.722 ], 00:20:29.722 "core_count": 1 00:20:29.722 } 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2433580 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2433580 ']' 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 2433580 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:29.722 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2433580 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2433580' 00:20:29.983 killing process with pid 2433580 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2433580 00:20:29.983 Received shutdown signal, test time was about 1.000000 seconds 00:20:29.983 00:20:29.983 Latency(us) 00:20:29.983 [2024-11-06T13:02:16.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.983 [2024-11-06T13:02:16.263Z] =================================================================================================================== 00:20:29.983 [2024-11-06T13:02:16.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2433580 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2433117 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2433117 ']' 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2433117 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2433117 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:29.983 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2433117' 00:20:29.984 killing process with pid 2433117 00:20:29.984 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2433117 00:20:29.984 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2433117 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2434162 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2434162 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2434162 ']' 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.244 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.244 [2024-11-06 14:02:16.428123] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:30.244 [2024-11-06 14:02:16.428191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.505 [2024-11-06 14:02:16.524841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.505 [2024-11-06 14:02:16.573576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.505 [2024-11-06 14:02:16.573628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.505 [2024-11-06 14:02:16.573636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.505 [2024-11-06 14:02:16.573643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.505 [2024-11-06 14:02:16.573649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:30.505 [2024-11-06 14:02:16.574446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.076 [2024-11-06 14:02:17.273514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.076 malloc0 00:20:31.076 [2024-11-06 14:02:17.300211] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.076 [2024-11-06 14:02:17.300433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2434415 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2434415 /var/tmp/bdevperf.sock 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2434415 ']' 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:31.076 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.337 [2024-11-06 14:02:17.379266] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:20:31.337 [2024-11-06 14:02:17.379324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434415 ] 00:20:31.337 [2024-11-06 14:02:17.465884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.337 [2024-11-06 14:02:17.497287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.276 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:32.276 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:32.276 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gBlnJbMHZu 00:20:32.276 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:32.276 [2024-11-06 14:02:18.526355] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.535 nvme0n1 00:20:32.535 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:32.535 Running I/O for 1 seconds... 
00:20:33.484 5795.00 IOPS, 22.64 MiB/s 00:20:33.484 Latency(us) 00:20:33.484 [2024-11-06T13:02:19.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.484 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.484 Verification LBA range: start 0x0 length 0x2000 00:20:33.484 nvme0n1 : 1.01 5853.98 22.87 0.00 0.00 21726.17 4450.99 67720.53 00:20:33.484 [2024-11-06T13:02:19.764Z] =================================================================================================================== 00:20:33.484 [2024-11-06T13:02:19.764Z] Total : 5853.98 22.87 0.00 0.00 21726.17 4450.99 67720.53 00:20:33.484 { 00:20:33.484 "results": [ 00:20:33.484 { 00:20:33.484 "job": "nvme0n1", 00:20:33.484 "core_mask": "0x2", 00:20:33.484 "workload": "verify", 00:20:33.484 "status": "finished", 00:20:33.484 "verify_range": { 00:20:33.484 "start": 0, 00:20:33.484 "length": 8192 00:20:33.484 }, 00:20:33.484 "queue_depth": 128, 00:20:33.484 "io_size": 4096, 00:20:33.484 "runtime": 1.01179, 00:20:33.484 "iops": 5853.981557437808, 00:20:33.484 "mibps": 22.867115458741438, 00:20:33.484 "io_failed": 0, 00:20:33.484 "io_timeout": 0, 00:20:33.484 "avg_latency_us": 21726.166910912267, 00:20:33.484 "min_latency_us": 4450.986666666667, 00:20:33.484 "max_latency_us": 67720.53333333334 00:20:33.484 } 00:20:33.484 ], 00:20:33.484 "core_count": 1 00:20:33.484 } 00:20:33.484 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:33.484 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.484 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.750 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.750 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:33.750 "subsystems": [ 00:20:33.750 { 00:20:33.750 "subsystem": 
"keyring", 00:20:33.750 "config": [ 00:20:33.750 { 00:20:33.750 "method": "keyring_file_add_key", 00:20:33.750 "params": { 00:20:33.750 "name": "key0", 00:20:33.750 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:33.750 } 00:20:33.750 } 00:20:33.750 ] 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "subsystem": "iobuf", 00:20:33.750 "config": [ 00:20:33.750 { 00:20:33.750 "method": "iobuf_set_options", 00:20:33.750 "params": { 00:20:33.750 "small_pool_count": 8192, 00:20:33.750 "large_pool_count": 1024, 00:20:33.750 "small_bufsize": 8192, 00:20:33.750 "large_bufsize": 135168, 00:20:33.750 "enable_numa": false 00:20:33.750 } 00:20:33.750 } 00:20:33.750 ] 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "subsystem": "sock", 00:20:33.750 "config": [ 00:20:33.750 { 00:20:33.750 "method": "sock_set_default_impl", 00:20:33.750 "params": { 00:20:33.750 "impl_name": "posix" 00:20:33.750 } 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "method": "sock_impl_set_options", 00:20:33.750 "params": { 00:20:33.750 "impl_name": "ssl", 00:20:33.750 "recv_buf_size": 4096, 00:20:33.750 "send_buf_size": 4096, 00:20:33.750 "enable_recv_pipe": true, 00:20:33.750 "enable_quickack": false, 00:20:33.750 "enable_placement_id": 0, 00:20:33.750 "enable_zerocopy_send_server": true, 00:20:33.750 "enable_zerocopy_send_client": false, 00:20:33.750 "zerocopy_threshold": 0, 00:20:33.750 "tls_version": 0, 00:20:33.750 "enable_ktls": false 00:20:33.750 } 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "method": "sock_impl_set_options", 00:20:33.750 "params": { 00:20:33.750 "impl_name": "posix", 00:20:33.750 "recv_buf_size": 2097152, 00:20:33.750 "send_buf_size": 2097152, 00:20:33.750 "enable_recv_pipe": true, 00:20:33.750 "enable_quickack": false, 00:20:33.750 "enable_placement_id": 0, 00:20:33.750 "enable_zerocopy_send_server": true, 00:20:33.750 "enable_zerocopy_send_client": false, 00:20:33.750 "zerocopy_threshold": 0, 00:20:33.750 "tls_version": 0, 00:20:33.750 "enable_ktls": false 00:20:33.750 } 00:20:33.750 } 00:20:33.750 
] 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "subsystem": "vmd", 00:20:33.750 "config": [] 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "subsystem": "accel", 00:20:33.750 "config": [ 00:20:33.750 { 00:20:33.750 "method": "accel_set_options", 00:20:33.750 "params": { 00:20:33.750 "small_cache_size": 128, 00:20:33.750 "large_cache_size": 16, 00:20:33.750 "task_count": 2048, 00:20:33.750 "sequence_count": 2048, 00:20:33.750 "buf_count": 2048 00:20:33.750 } 00:20:33.750 } 00:20:33.750 ] 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "subsystem": "bdev", 00:20:33.750 "config": [ 00:20:33.750 { 00:20:33.750 "method": "bdev_set_options", 00:20:33.750 "params": { 00:20:33.750 "bdev_io_pool_size": 65535, 00:20:33.750 "bdev_io_cache_size": 256, 00:20:33.750 "bdev_auto_examine": true, 00:20:33.750 "iobuf_small_cache_size": 128, 00:20:33.750 "iobuf_large_cache_size": 16 00:20:33.750 } 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "method": "bdev_raid_set_options", 00:20:33.750 "params": { 00:20:33.750 "process_window_size_kb": 1024, 00:20:33.750 "process_max_bandwidth_mb_sec": 0 00:20:33.750 } 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "method": "bdev_iscsi_set_options", 00:20:33.750 "params": { 00:20:33.750 "timeout_sec": 30 00:20:33.750 } 00:20:33.750 }, 00:20:33.750 { 00:20:33.750 "method": "bdev_nvme_set_options", 00:20:33.750 "params": { 00:20:33.750 "action_on_timeout": "none", 00:20:33.750 "timeout_us": 0, 00:20:33.750 "timeout_admin_us": 0, 00:20:33.750 "keep_alive_timeout_ms": 10000, 00:20:33.750 "arbitration_burst": 0, 00:20:33.750 "low_priority_weight": 0, 00:20:33.750 "medium_priority_weight": 0, 00:20:33.750 "high_priority_weight": 0, 00:20:33.750 "nvme_adminq_poll_period_us": 10000, 00:20:33.750 "nvme_ioq_poll_period_us": 0, 00:20:33.750 "io_queue_requests": 0, 00:20:33.750 "delay_cmd_submit": true, 00:20:33.750 "transport_retry_count": 4, 00:20:33.750 "bdev_retry_count": 3, 00:20:33.750 "transport_ack_timeout": 0, 00:20:33.750 "ctrlr_loss_timeout_sec": 0, 
00:20:33.750 "reconnect_delay_sec": 0, 00:20:33.750 "fast_io_fail_timeout_sec": 0, 00:20:33.750 "disable_auto_failback": false, 00:20:33.750 "generate_uuids": false, 00:20:33.750 "transport_tos": 0, 00:20:33.750 "nvme_error_stat": false, 00:20:33.750 "rdma_srq_size": 0, 00:20:33.750 "io_path_stat": false, 00:20:33.750 "allow_accel_sequence": false, 00:20:33.750 "rdma_max_cq_size": 0, 00:20:33.750 "rdma_cm_event_timeout_ms": 0, 00:20:33.750 "dhchap_digests": [ 00:20:33.750 "sha256", 00:20:33.750 "sha384", 00:20:33.750 "sha512" 00:20:33.750 ], 00:20:33.751 "dhchap_dhgroups": [ 00:20:33.751 "null", 00:20:33.751 "ffdhe2048", 00:20:33.751 "ffdhe3072", 00:20:33.751 "ffdhe4096", 00:20:33.751 "ffdhe6144", 00:20:33.751 "ffdhe8192" 00:20:33.751 ] 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "bdev_nvme_set_hotplug", 00:20:33.751 "params": { 00:20:33.751 "period_us": 100000, 00:20:33.751 "enable": false 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "bdev_malloc_create", 00:20:33.751 "params": { 00:20:33.751 "name": "malloc0", 00:20:33.751 "num_blocks": 8192, 00:20:33.751 "block_size": 4096, 00:20:33.751 "physical_block_size": 4096, 00:20:33.751 "uuid": "2f873ca7-6c73-454b-ab7a-ee1c655c2329", 00:20:33.751 "optimal_io_boundary": 0, 00:20:33.751 "md_size": 0, 00:20:33.751 "dif_type": 0, 00:20:33.751 "dif_is_head_of_md": false, 00:20:33.751 "dif_pi_format": 0 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "bdev_wait_for_examine" 00:20:33.751 } 00:20:33.751 ] 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "subsystem": "nbd", 00:20:33.751 "config": [] 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "subsystem": "scheduler", 00:20:33.751 "config": [ 00:20:33.751 { 00:20:33.751 "method": "framework_set_scheduler", 00:20:33.751 "params": { 00:20:33.751 "name": "static" 00:20:33.751 } 00:20:33.751 } 00:20:33.751 ] 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "subsystem": "nvmf", 00:20:33.751 "config": [ 00:20:33.751 { 
00:20:33.751 "method": "nvmf_set_config", 00:20:33.751 "params": { 00:20:33.751 "discovery_filter": "match_any", 00:20:33.751 "admin_cmd_passthru": { 00:20:33.751 "identify_ctrlr": false 00:20:33.751 }, 00:20:33.751 "dhchap_digests": [ 00:20:33.751 "sha256", 00:20:33.751 "sha384", 00:20:33.751 "sha512" 00:20:33.751 ], 00:20:33.751 "dhchap_dhgroups": [ 00:20:33.751 "null", 00:20:33.751 "ffdhe2048", 00:20:33.751 "ffdhe3072", 00:20:33.751 "ffdhe4096", 00:20:33.751 "ffdhe6144", 00:20:33.751 "ffdhe8192" 00:20:33.751 ] 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_set_max_subsystems", 00:20:33.751 "params": { 00:20:33.751 "max_subsystems": 1024 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_set_crdt", 00:20:33.751 "params": { 00:20:33.751 "crdt1": 0, 00:20:33.751 "crdt2": 0, 00:20:33.751 "crdt3": 0 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_create_transport", 00:20:33.751 "params": { 00:20:33.751 "trtype": "TCP", 00:20:33.751 "max_queue_depth": 128, 00:20:33.751 "max_io_qpairs_per_ctrlr": 127, 00:20:33.751 "in_capsule_data_size": 4096, 00:20:33.751 "max_io_size": 131072, 00:20:33.751 "io_unit_size": 131072, 00:20:33.751 "max_aq_depth": 128, 00:20:33.751 "num_shared_buffers": 511, 00:20:33.751 "buf_cache_size": 4294967295, 00:20:33.751 "dif_insert_or_strip": false, 00:20:33.751 "zcopy": false, 00:20:33.751 "c2h_success": false, 00:20:33.751 "sock_priority": 0, 00:20:33.751 "abort_timeout_sec": 1, 00:20:33.751 "ack_timeout": 0, 00:20:33.751 "data_wr_pool_size": 0 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_create_subsystem", 00:20:33.751 "params": { 00:20:33.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.751 "allow_any_host": false, 00:20:33.751 "serial_number": "00000000000000000000", 00:20:33.751 "model_number": "SPDK bdev Controller", 00:20:33.751 "max_namespaces": 32, 00:20:33.751 "min_cntlid": 1, 00:20:33.751 "max_cntlid": 65519, 00:20:33.751 
"ana_reporting": false 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_subsystem_add_host", 00:20:33.751 "params": { 00:20:33.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.751 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.751 "psk": "key0" 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_subsystem_add_ns", 00:20:33.751 "params": { 00:20:33.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.751 "namespace": { 00:20:33.751 "nsid": 1, 00:20:33.751 "bdev_name": "malloc0", 00:20:33.751 "nguid": "2F873CA76C73454BAB7AEE1C655C2329", 00:20:33.751 "uuid": "2f873ca7-6c73-454b-ab7a-ee1c655c2329", 00:20:33.751 "no_auto_visible": false 00:20:33.751 } 00:20:33.751 } 00:20:33.751 }, 00:20:33.751 { 00:20:33.751 "method": "nvmf_subsystem_add_listener", 00:20:33.751 "params": { 00:20:33.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.751 "listen_address": { 00:20:33.751 "trtype": "TCP", 00:20:33.751 "adrfam": "IPv4", 00:20:33.751 "traddr": "10.0.0.2", 00:20:33.751 "trsvcid": "4420" 00:20:33.751 }, 00:20:33.751 "secure_channel": false, 00:20:33.751 "sock_impl": "ssl" 00:20:33.751 } 00:20:33.751 } 00:20:33.751 ] 00:20:33.751 } 00:20:33.751 ] 00:20:33.751 }' 00:20:33.751 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:34.011 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:34.011 "subsystems": [ 00:20:34.011 { 00:20:34.011 "subsystem": "keyring", 00:20:34.011 "config": [ 00:20:34.011 { 00:20:34.011 "method": "keyring_file_add_key", 00:20:34.011 "params": { 00:20:34.011 "name": "key0", 00:20:34.011 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:34.011 } 00:20:34.011 } 00:20:34.011 ] 00:20:34.011 }, 00:20:34.011 { 00:20:34.011 "subsystem": "iobuf", 00:20:34.011 "config": [ 00:20:34.011 { 00:20:34.011 "method": "iobuf_set_options", 00:20:34.011 "params": { 00:20:34.011 
"small_pool_count": 8192, 00:20:34.011 "large_pool_count": 1024, 00:20:34.011 "small_bufsize": 8192, 00:20:34.011 "large_bufsize": 135168, 00:20:34.011 "enable_numa": false 00:20:34.011 } 00:20:34.011 } 00:20:34.011 ] 00:20:34.011 }, 00:20:34.011 { 00:20:34.011 "subsystem": "sock", 00:20:34.011 "config": [ 00:20:34.011 { 00:20:34.011 "method": "sock_set_default_impl", 00:20:34.011 "params": { 00:20:34.011 "impl_name": "posix" 00:20:34.011 } 00:20:34.011 }, 00:20:34.011 { 00:20:34.011 "method": "sock_impl_set_options", 00:20:34.011 "params": { 00:20:34.011 "impl_name": "ssl", 00:20:34.011 "recv_buf_size": 4096, 00:20:34.011 "send_buf_size": 4096, 00:20:34.011 "enable_recv_pipe": true, 00:20:34.011 "enable_quickack": false, 00:20:34.011 "enable_placement_id": 0, 00:20:34.012 "enable_zerocopy_send_server": true, 00:20:34.012 "enable_zerocopy_send_client": false, 00:20:34.012 "zerocopy_threshold": 0, 00:20:34.012 "tls_version": 0, 00:20:34.012 "enable_ktls": false 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "sock_impl_set_options", 00:20:34.012 "params": { 00:20:34.012 "impl_name": "posix", 00:20:34.012 "recv_buf_size": 2097152, 00:20:34.012 "send_buf_size": 2097152, 00:20:34.012 "enable_recv_pipe": true, 00:20:34.012 "enable_quickack": false, 00:20:34.012 "enable_placement_id": 0, 00:20:34.012 "enable_zerocopy_send_server": true, 00:20:34.012 "enable_zerocopy_send_client": false, 00:20:34.012 "zerocopy_threshold": 0, 00:20:34.012 "tls_version": 0, 00:20:34.012 "enable_ktls": false 00:20:34.012 } 00:20:34.012 } 00:20:34.012 ] 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "subsystem": "vmd", 00:20:34.012 "config": [] 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "subsystem": "accel", 00:20:34.012 "config": [ 00:20:34.012 { 00:20:34.012 "method": "accel_set_options", 00:20:34.012 "params": { 00:20:34.012 "small_cache_size": 128, 00:20:34.012 "large_cache_size": 16, 00:20:34.012 "task_count": 2048, 00:20:34.012 "sequence_count": 2048, 00:20:34.012 
"buf_count": 2048 00:20:34.012 } 00:20:34.012 } 00:20:34.012 ] 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "subsystem": "bdev", 00:20:34.012 "config": [ 00:20:34.012 { 00:20:34.012 "method": "bdev_set_options", 00:20:34.012 "params": { 00:20:34.012 "bdev_io_pool_size": 65535, 00:20:34.012 "bdev_io_cache_size": 256, 00:20:34.012 "bdev_auto_examine": true, 00:20:34.012 "iobuf_small_cache_size": 128, 00:20:34.012 "iobuf_large_cache_size": 16 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_raid_set_options", 00:20:34.012 "params": { 00:20:34.012 "process_window_size_kb": 1024, 00:20:34.012 "process_max_bandwidth_mb_sec": 0 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_iscsi_set_options", 00:20:34.012 "params": { 00:20:34.012 "timeout_sec": 30 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_nvme_set_options", 00:20:34.012 "params": { 00:20:34.012 "action_on_timeout": "none", 00:20:34.012 "timeout_us": 0, 00:20:34.012 "timeout_admin_us": 0, 00:20:34.012 "keep_alive_timeout_ms": 10000, 00:20:34.012 "arbitration_burst": 0, 00:20:34.012 "low_priority_weight": 0, 00:20:34.012 "medium_priority_weight": 0, 00:20:34.012 "high_priority_weight": 0, 00:20:34.012 "nvme_adminq_poll_period_us": 10000, 00:20:34.012 "nvme_ioq_poll_period_us": 0, 00:20:34.012 "io_queue_requests": 512, 00:20:34.012 "delay_cmd_submit": true, 00:20:34.012 "transport_retry_count": 4, 00:20:34.012 "bdev_retry_count": 3, 00:20:34.012 "transport_ack_timeout": 0, 00:20:34.012 "ctrlr_loss_timeout_sec": 0, 00:20:34.012 "reconnect_delay_sec": 0, 00:20:34.012 "fast_io_fail_timeout_sec": 0, 00:20:34.012 "disable_auto_failback": false, 00:20:34.012 "generate_uuids": false, 00:20:34.012 "transport_tos": 0, 00:20:34.012 "nvme_error_stat": false, 00:20:34.012 "rdma_srq_size": 0, 00:20:34.012 "io_path_stat": false, 00:20:34.012 "allow_accel_sequence": false, 00:20:34.012 "rdma_max_cq_size": 0, 00:20:34.012 "rdma_cm_event_timeout_ms": 0, 
00:20:34.012 "dhchap_digests": [ 00:20:34.012 "sha256", 00:20:34.012 "sha384", 00:20:34.012 "sha512" 00:20:34.012 ], 00:20:34.012 "dhchap_dhgroups": [ 00:20:34.012 "null", 00:20:34.012 "ffdhe2048", 00:20:34.012 "ffdhe3072", 00:20:34.012 "ffdhe4096", 00:20:34.012 "ffdhe6144", 00:20:34.012 "ffdhe8192" 00:20:34.012 ] 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_nvme_attach_controller", 00:20:34.012 "params": { 00:20:34.012 "name": "nvme0", 00:20:34.012 "trtype": "TCP", 00:20:34.012 "adrfam": "IPv4", 00:20:34.012 "traddr": "10.0.0.2", 00:20:34.012 "trsvcid": "4420", 00:20:34.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.012 "prchk_reftag": false, 00:20:34.012 "prchk_guard": false, 00:20:34.012 "ctrlr_loss_timeout_sec": 0, 00:20:34.012 "reconnect_delay_sec": 0, 00:20:34.012 "fast_io_fail_timeout_sec": 0, 00:20:34.012 "psk": "key0", 00:20:34.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.012 "hdgst": false, 00:20:34.012 "ddgst": false, 00:20:34.012 "multipath": "multipath" 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_nvme_set_hotplug", 00:20:34.012 "params": { 00:20:34.012 "period_us": 100000, 00:20:34.012 "enable": false 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_enable_histogram", 00:20:34.012 "params": { 00:20:34.012 "name": "nvme0n1", 00:20:34.012 "enable": true 00:20:34.012 } 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "method": "bdev_wait_for_examine" 00:20:34.012 } 00:20:34.012 ] 00:20:34.012 }, 00:20:34.012 { 00:20:34.012 "subsystem": "nbd", 00:20:34.012 "config": [] 00:20:34.012 } 00:20:34.012 ] 00:20:34.012 }' 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2434415 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2434415 ']' 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2434415 00:20:34.012 14:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2434415 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2434415' 00:20:34.012 killing process with pid 2434415 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2434415 00:20:34.012 Received shutdown signal, test time was about 1.000000 seconds 00:20:34.012 00:20:34.012 Latency(us) 00:20:34.012 [2024-11-06T13:02:20.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.012 [2024-11-06T13:02:20.292Z] =================================================================================================================== 00:20:34.012 [2024-11-06T13:02:20.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2434415 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2434162 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2434162 ']' 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2434162 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.012 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.273 
14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2434162 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2434162' 00:20:34.273 killing process with pid 2434162 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2434162 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2434162 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.273 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:34.273 "subsystems": [ 00:20:34.273 { 00:20:34.273 "subsystem": "keyring", 00:20:34.273 "config": [ 00:20:34.273 { 00:20:34.273 "method": "keyring_file_add_key", 00:20:34.273 "params": { 00:20:34.273 "name": "key0", 00:20:34.273 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:34.273 } 00:20:34.273 } 00:20:34.273 ] 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "subsystem": "iobuf", 00:20:34.273 "config": [ 00:20:34.273 { 00:20:34.273 "method": "iobuf_set_options", 00:20:34.273 "params": { 00:20:34.273 "small_pool_count": 8192, 00:20:34.273 "large_pool_count": 1024, 00:20:34.273 "small_bufsize": 8192, 00:20:34.273 "large_bufsize": 135168, 00:20:34.273 "enable_numa": false 00:20:34.273 } 00:20:34.273 } 00:20:34.273 
] 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "subsystem": "sock", 00:20:34.273 "config": [ 00:20:34.273 { 00:20:34.273 "method": "sock_set_default_impl", 00:20:34.273 "params": { 00:20:34.273 "impl_name": "posix" 00:20:34.273 } 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "method": "sock_impl_set_options", 00:20:34.273 "params": { 00:20:34.273 "impl_name": "ssl", 00:20:34.273 "recv_buf_size": 4096, 00:20:34.273 "send_buf_size": 4096, 00:20:34.273 "enable_recv_pipe": true, 00:20:34.273 "enable_quickack": false, 00:20:34.273 "enable_placement_id": 0, 00:20:34.273 "enable_zerocopy_send_server": true, 00:20:34.273 "enable_zerocopy_send_client": false, 00:20:34.273 "zerocopy_threshold": 0, 00:20:34.273 "tls_version": 0, 00:20:34.273 "enable_ktls": false 00:20:34.273 } 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "method": "sock_impl_set_options", 00:20:34.273 "params": { 00:20:34.273 "impl_name": "posix", 00:20:34.273 "recv_buf_size": 2097152, 00:20:34.273 "send_buf_size": 2097152, 00:20:34.273 "enable_recv_pipe": true, 00:20:34.273 "enable_quickack": false, 00:20:34.273 "enable_placement_id": 0, 00:20:34.273 "enable_zerocopy_send_server": true, 00:20:34.273 "enable_zerocopy_send_client": false, 00:20:34.273 "zerocopy_threshold": 0, 00:20:34.273 "tls_version": 0, 00:20:34.273 "enable_ktls": false 00:20:34.273 } 00:20:34.273 } 00:20:34.273 ] 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "subsystem": "vmd", 00:20:34.273 "config": [] 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "subsystem": "accel", 00:20:34.273 "config": [ 00:20:34.273 { 00:20:34.273 "method": "accel_set_options", 00:20:34.273 "params": { 00:20:34.273 "small_cache_size": 128, 00:20:34.273 "large_cache_size": 16, 00:20:34.273 "task_count": 2048, 00:20:34.273 "sequence_count": 2048, 00:20:34.273 "buf_count": 2048 00:20:34.273 } 00:20:34.273 } 00:20:34.273 ] 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "subsystem": "bdev", 00:20:34.273 "config": [ 00:20:34.273 { 00:20:34.273 "method": "bdev_set_options", 
00:20:34.273 "params": { 00:20:34.273 "bdev_io_pool_size": 65535, 00:20:34.273 "bdev_io_cache_size": 256, 00:20:34.273 "bdev_auto_examine": true, 00:20:34.273 "iobuf_small_cache_size": 128, 00:20:34.273 "iobuf_large_cache_size": 16 00:20:34.273 } 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "method": "bdev_raid_set_options", 00:20:34.273 "params": { 00:20:34.273 "process_window_size_kb": 1024, 00:20:34.273 "process_max_bandwidth_mb_sec": 0 00:20:34.273 } 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "method": "bdev_iscsi_set_options", 00:20:34.273 "params": { 00:20:34.273 "timeout_sec": 30 00:20:34.273 } 00:20:34.273 }, 00:20:34.273 { 00:20:34.273 "method": "bdev_nvme_set_options", 00:20:34.273 "params": { 00:20:34.273 "action_on_timeout": "none", 00:20:34.273 "timeout_us": 0, 00:20:34.273 "timeout_admin_us": 0, 00:20:34.273 "keep_alive_timeout_ms": 10000, 00:20:34.273 "arbitration_burst": 0, 00:20:34.273 "low_priority_weight": 0, 00:20:34.273 "medium_priority_weight": 0, 00:20:34.273 "high_priority_weight": 0, 00:20:34.273 "nvme_adminq_poll_period_us": 10000, 00:20:34.273 "nvme_ioq_poll_period_us": 0, 00:20:34.273 "io_queue_requests": 0, 00:20:34.273 "delay_cmd_submit": true, 00:20:34.273 "transport_retry_count": 4, 00:20:34.273 "bdev_retry_count": 3, 00:20:34.273 "transport_ack_timeout": 0, 00:20:34.273 "ctrlr_loss_timeout_sec": 0, 00:20:34.273 "reconnect_delay_sec": 0, 00:20:34.273 "fast_io_fail_timeout_sec": 0, 00:20:34.273 "disable_auto_failback": false, 00:20:34.273 "generate_uuids": false, 00:20:34.273 "transport_tos": 0, 00:20:34.274 "nvme_error_stat": false, 00:20:34.274 "rdma_srq_size": 0, 00:20:34.274 "io_path_stat": false, 00:20:34.274 "allow_accel_sequence": false, 00:20:34.274 "rdma_max_cq_size": 0, 00:20:34.274 "rdma_cm_event_timeout_ms": 0, 00:20:34.274 "dhchap_digests": [ 00:20:34.274 "sha256", 00:20:34.274 "sha384", 00:20:34.274 "sha512" 00:20:34.274 ], 00:20:34.274 "dhchap_dhgroups": [ 00:20:34.274 "null", 00:20:34.274 "ffdhe2048", 00:20:34.274 
"ffdhe3072", 00:20:34.274 "ffdhe4096", 00:20:34.274 "ffdhe6144", 00:20:34.274 "ffdhe8192" 00:20:34.274 ] 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "bdev_nvme_set_hotplug", 00:20:34.274 "params": { 00:20:34.274 "period_us": 100000, 00:20:34.274 "enable": false 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "bdev_malloc_create", 00:20:34.274 "params": { 00:20:34.274 "name": "malloc0", 00:20:34.274 "num_blocks": 8192, 00:20:34.274 "block_size": 4096, 00:20:34.274 "physical_block_size": 4096, 00:20:34.274 "uuid": "2f873ca7-6c73-454b-ab7a-ee1c655c2329", 00:20:34.274 "optimal_io_boundary": 0, 00:20:34.274 "md_size": 0, 00:20:34.274 "dif_type": 0, 00:20:34.274 "dif_is_head_of_md": false, 00:20:34.274 "dif_pi_format": 0 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "bdev_wait_for_examine" 00:20:34.274 } 00:20:34.274 ] 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "subsystem": "nbd", 00:20:34.274 "config": [] 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "subsystem": "scheduler", 00:20:34.274 "config": [ 00:20:34.274 { 00:20:34.274 "method": "framework_set_scheduler", 00:20:34.274 "params": { 00:20:34.274 "name": "static" 00:20:34.274 } 00:20:34.274 } 00:20:34.274 ] 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "subsystem": "nvmf", 00:20:34.274 "config": [ 00:20:34.274 { 00:20:34.274 "method": "nvmf_set_config", 00:20:34.274 "params": { 00:20:34.274 "discovery_filter": "match_any", 00:20:34.274 "admin_cmd_passthru": { 00:20:34.274 "identify_ctrlr": false 00:20:34.274 }, 00:20:34.274 "dhchap_digests": [ 00:20:34.274 "sha256", 00:20:34.274 "sha384", 00:20:34.274 "sha512" 00:20:34.274 ], 00:20:34.274 "dhchap_dhgroups": [ 00:20:34.274 "null", 00:20:34.274 "ffdhe2048", 00:20:34.274 "ffdhe3072", 00:20:34.274 "ffdhe4096", 00:20:34.274 "ffdhe6144", 00:20:34.274 "ffdhe8192" 00:20:34.274 ] 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_set_max_subsystems", 00:20:34.274 "params": { 
00:20:34.274 "max_subsystems": 1024 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_set_crdt", 00:20:34.274 "params": { 00:20:34.274 "crdt1": 0, 00:20:34.274 "crdt2": 0, 00:20:34.274 "crdt3": 0 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_create_transport", 00:20:34.274 "params": { 00:20:34.274 "trtype": "TCP", 00:20:34.274 "max_queue_depth": 128, 00:20:34.274 "max_io_qpairs_per_ctrlr": 127, 00:20:34.274 "in_capsule_data_size": 4096, 00:20:34.274 "max_io_size": 131072, 00:20:34.274 "io_unit_size": 131072, 00:20:34.274 "max_aq_depth": 128, 00:20:34.274 "num_shared_buffers": 511, 00:20:34.274 "buf_cache_size": 4294967295, 00:20:34.274 "dif_insert_or_strip": false, 00:20:34.274 "zcopy": false, 00:20:34.274 "c2h_success": false, 00:20:34.274 "sock_priority": 0, 00:20:34.274 "abort_timeout_sec": 1, 00:20:34.274 "ack_timeout": 0, 00:20:34.274 "data_wr_pool_size": 0 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_create_subsystem", 00:20:34.274 "params": { 00:20:34.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.274 "allow_any_host": false, 00:20:34.274 "serial_number": "00000000000000000000", 00:20:34.274 "model_number": "SPDK bdev Controller", 00:20:34.274 "max_namespaces": 32, 00:20:34.274 "min_cntlid": 1, 00:20:34.274 "max_cntlid": 65519, 00:20:34.274 "ana_reporting": false 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_subsystem_add_host", 00:20:34.274 "params": { 00:20:34.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.274 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.274 "psk": "key0" 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_subsystem_add_ns", 00:20:34.274 "params": { 00:20:34.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.274 "namespace": { 00:20:34.274 "nsid": 1, 00:20:34.274 "bdev_name": "malloc0", 00:20:34.274 "nguid": "2F873CA76C73454BAB7AEE1C655C2329", 00:20:34.274 "uuid": 
"2f873ca7-6c73-454b-ab7a-ee1c655c2329", 00:20:34.274 "no_auto_visible": false 00:20:34.274 } 00:20:34.274 } 00:20:34.274 }, 00:20:34.274 { 00:20:34.274 "method": "nvmf_subsystem_add_listener", 00:20:34.274 "params": { 00:20:34.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.274 "listen_address": { 00:20:34.274 "trtype": "TCP", 00:20:34.274 "adrfam": "IPv4", 00:20:34.274 "traddr": "10.0.0.2", 00:20:34.274 "trsvcid": "4420" 00:20:34.274 }, 00:20:34.274 "secure_channel": false, 00:20:34.274 "sock_impl": "ssl" 00:20:34.274 } 00:20:34.274 } 00:20:34.274 ] 00:20:34.274 } 00:20:34.274 ] 00:20:34.274 }' 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2434934 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2434934 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2434934 ']' 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.274 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.274 [2024-11-06 14:02:20.516755] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:34.274 [2024-11-06 14:02:20.516810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.535 [2024-11-06 14:02:20.608627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.535 [2024-11-06 14:02:20.640714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.535 [2024-11-06 14:02:20.640752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.535 [2024-11-06 14:02:20.640758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.535 [2024-11-06 14:02:20.640763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.535 [2024-11-06 14:02:20.640767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.535 [2024-11-06 14:02:20.641304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.795 [2024-11-06 14:02:20.836128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.795 [2024-11-06 14:02:20.868163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.795 [2024-11-06 14:02:20.868351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.055 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.055 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:35.055 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.055 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.055 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2435229 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2435229 /var/tmp/bdevperf.sock 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2435229 ']' 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:35.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.318 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:35.318 "subsystems": [ 00:20:35.318 { 00:20:35.318 "subsystem": "keyring", 00:20:35.318 "config": [ 00:20:35.318 { 00:20:35.318 "method": "keyring_file_add_key", 00:20:35.318 "params": { 00:20:35.318 "name": "key0", 00:20:35.318 "path": "/tmp/tmp.gBlnJbMHZu" 00:20:35.318 } 00:20:35.318 } 00:20:35.318 ] 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "subsystem": "iobuf", 00:20:35.318 "config": [ 00:20:35.318 { 00:20:35.318 "method": "iobuf_set_options", 00:20:35.318 "params": { 00:20:35.318 "small_pool_count": 8192, 00:20:35.318 "large_pool_count": 1024, 00:20:35.318 "small_bufsize": 8192, 00:20:35.318 "large_bufsize": 135168, 00:20:35.318 "enable_numa": false 00:20:35.318 } 00:20:35.318 } 00:20:35.318 ] 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "subsystem": "sock", 00:20:35.318 "config": [ 00:20:35.318 { 00:20:35.318 "method": "sock_set_default_impl", 00:20:35.318 "params": { 00:20:35.318 "impl_name": "posix" 00:20:35.318 } 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "method": "sock_impl_set_options", 00:20:35.318 "params": { 00:20:35.318 "impl_name": "ssl", 00:20:35.318 "recv_buf_size": 4096, 00:20:35.318 "send_buf_size": 4096, 00:20:35.318 "enable_recv_pipe": true, 00:20:35.318 "enable_quickack": false, 00:20:35.318 "enable_placement_id": 0, 00:20:35.318 "enable_zerocopy_send_server": true, 00:20:35.318 
"enable_zerocopy_send_client": false, 00:20:35.318 "zerocopy_threshold": 0, 00:20:35.318 "tls_version": 0, 00:20:35.318 "enable_ktls": false 00:20:35.318 } 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "method": "sock_impl_set_options", 00:20:35.318 "params": { 00:20:35.318 "impl_name": "posix", 00:20:35.318 "recv_buf_size": 2097152, 00:20:35.318 "send_buf_size": 2097152, 00:20:35.318 "enable_recv_pipe": true, 00:20:35.318 "enable_quickack": false, 00:20:35.318 "enable_placement_id": 0, 00:20:35.318 "enable_zerocopy_send_server": true, 00:20:35.318 "enable_zerocopy_send_client": false, 00:20:35.318 "zerocopy_threshold": 0, 00:20:35.318 "tls_version": 0, 00:20:35.318 "enable_ktls": false 00:20:35.318 } 00:20:35.318 } 00:20:35.318 ] 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "subsystem": "vmd", 00:20:35.318 "config": [] 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "subsystem": "accel", 00:20:35.318 "config": [ 00:20:35.318 { 00:20:35.318 "method": "accel_set_options", 00:20:35.318 "params": { 00:20:35.318 "small_cache_size": 128, 00:20:35.318 "large_cache_size": 16, 00:20:35.318 "task_count": 2048, 00:20:35.318 "sequence_count": 2048, 00:20:35.318 "buf_count": 2048 00:20:35.318 } 00:20:35.318 } 00:20:35.318 ] 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "subsystem": "bdev", 00:20:35.318 "config": [ 00:20:35.318 { 00:20:35.318 "method": "bdev_set_options", 00:20:35.318 "params": { 00:20:35.318 "bdev_io_pool_size": 65535, 00:20:35.318 "bdev_io_cache_size": 256, 00:20:35.318 "bdev_auto_examine": true, 00:20:35.318 "iobuf_small_cache_size": 128, 00:20:35.318 "iobuf_large_cache_size": 16 00:20:35.318 } 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "method": "bdev_raid_set_options", 00:20:35.318 "params": { 00:20:35.318 "process_window_size_kb": 1024, 00:20:35.318 "process_max_bandwidth_mb_sec": 0 00:20:35.318 } 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "method": "bdev_iscsi_set_options", 00:20:35.318 "params": { 00:20:35.318 "timeout_sec": 30 00:20:35.318 } 00:20:35.318 }, 
00:20:35.318 { 00:20:35.318 "method": "bdev_nvme_set_options", 00:20:35.318 "params": { 00:20:35.318 "action_on_timeout": "none", 00:20:35.318 "timeout_us": 0, 00:20:35.318 "timeout_admin_us": 0, 00:20:35.318 "keep_alive_timeout_ms": 10000, 00:20:35.318 "arbitration_burst": 0, 00:20:35.318 "low_priority_weight": 0, 00:20:35.318 "medium_priority_weight": 0, 00:20:35.318 "high_priority_weight": 0, 00:20:35.318 "nvme_adminq_poll_period_us": 10000, 00:20:35.318 "nvme_ioq_poll_period_us": 0, 00:20:35.318 "io_queue_requests": 512, 00:20:35.318 "delay_cmd_submit": true, 00:20:35.318 "transport_retry_count": 4, 00:20:35.318 "bdev_retry_count": 3, 00:20:35.318 "transport_ack_timeout": 0, 00:20:35.318 "ctrlr_loss_timeout_sec": 0, 00:20:35.318 "reconnect_delay_sec": 0, 00:20:35.318 "fast_io_fail_timeout_sec": 0, 00:20:35.318 "disable_auto_failback": false, 00:20:35.318 "generate_uuids": false, 00:20:35.318 "transport_tos": 0, 00:20:35.318 "nvme_error_stat": false, 00:20:35.318 "rdma_srq_size": 0, 00:20:35.318 "io_path_stat": false, 00:20:35.318 "allow_accel_sequence": false, 00:20:35.318 "rdma_max_cq_size": 0, 00:20:35.318 "rdma_cm_event_timeout_ms": 0, 00:20:35.318 "dhchap_digests": [ 00:20:35.318 "sha256", 00:20:35.318 "sha384", 00:20:35.318 "sha512" 00:20:35.318 ], 00:20:35.318 "dhchap_dhgroups": [ 00:20:35.318 "null", 00:20:35.318 "ffdhe2048", 00:20:35.318 "ffdhe3072", 00:20:35.318 "ffdhe4096", 00:20:35.318 "ffdhe6144", 00:20:35.318 "ffdhe8192" 00:20:35.318 ] 00:20:35.318 } 00:20:35.318 }, 00:20:35.318 { 00:20:35.318 "method": "bdev_nvme_attach_controller", 00:20:35.318 "params": { 00:20:35.318 "name": "nvme0", 00:20:35.318 "trtype": "TCP", 00:20:35.318 "adrfam": "IPv4", 00:20:35.319 "traddr": "10.0.0.2", 00:20:35.319 "trsvcid": "4420", 00:20:35.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.319 "prchk_reftag": false, 00:20:35.319 "prchk_guard": false, 00:20:35.319 "ctrlr_loss_timeout_sec": 0, 00:20:35.319 "reconnect_delay_sec": 0, 00:20:35.319 
"fast_io_fail_timeout_sec": 0, 00:20:35.319 "psk": "key0", 00:20:35.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.319 "hdgst": false, 00:20:35.319 "ddgst": false, 00:20:35.319 "multipath": "multipath" 00:20:35.319 } 00:20:35.319 }, 00:20:35.319 { 00:20:35.319 "method": "bdev_nvme_set_hotplug", 00:20:35.319 "params": { 00:20:35.319 "period_us": 100000, 00:20:35.319 "enable": false 00:20:35.319 } 00:20:35.319 }, 00:20:35.319 { 00:20:35.319 "method": "bdev_enable_histogram", 00:20:35.319 "params": { 00:20:35.319 "name": "nvme0n1", 00:20:35.319 "enable": true 00:20:35.319 } 00:20:35.319 }, 00:20:35.319 { 00:20:35.319 "method": "bdev_wait_for_examine" 00:20:35.319 } 00:20:35.319 ] 00:20:35.319 }, 00:20:35.319 { 00:20:35.319 "subsystem": "nbd", 00:20:35.319 "config": [] 00:20:35.319 } 00:20:35.319 ] 00:20:35.319 }' 00:20:35.319 [2024-11-06 14:02:21.422404] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:20:35.319 [2024-11-06 14:02:21.422459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435229 ] 00:20:35.319 [2024-11-06 14:02:21.507550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.319 [2024-11-06 14:02:21.537381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.579 [2024-11-06 14:02:21.673435] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.152 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.412 Running I/O for 1 seconds... 00:20:37.353 4643.00 IOPS, 18.14 MiB/s 00:20:37.353 Latency(us) 00:20:37.353 [2024-11-06T13:02:23.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.353 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.353 Verification LBA range: start 0x0 length 0x2000 00:20:37.353 nvme0n1 : 1.04 4602.76 17.98 0.00 0.00 27377.91 4532.91 34297.17 00:20:37.353 [2024-11-06T13:02:23.633Z] =================================================================================================================== 00:20:37.353 [2024-11-06T13:02:23.633Z] Total : 4602.76 17.98 0.00 0.00 27377.91 4532.91 34297.17 00:20:37.353 { 00:20:37.353 "results": [ 00:20:37.353 { 00:20:37.353 "job": "nvme0n1", 00:20:37.353 "core_mask": "0x2", 00:20:37.353 "workload": "verify", 00:20:37.353 "status": "finished", 00:20:37.353 "verify_range": { 00:20:37.353 "start": 0, 00:20:37.353 "length": 8192 00:20:37.353 }, 00:20:37.353 "queue_depth": 128, 00:20:37.353 "io_size": 4096, 00:20:37.353 "runtime": 1.036552, 00:20:37.353 "iops": 4602.759919425172, 00:20:37.353 "mibps": 17.979530935254576, 00:20:37.353 "io_failed": 0, 00:20:37.353 "io_timeout": 0, 00:20:37.353 "avg_latency_us": 27377.909536784744, 00:20:37.353 "min_latency_us": 4532.906666666667, 00:20:37.353 "max_latency_us": 34297.17333333333 00:20:37.353 } 00:20:37.353 ], 00:20:37.353 "core_count": 1 00:20:37.353 } 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:37.353 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:37.353 nvmf_trace.0 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2435229 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2435229 ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2435229 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2435229 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2435229' 00:20:37.614 killing process with pid 2435229 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2435229 00:20:37.614 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.614 00:20:37.614 Latency(us) 00:20:37.614 [2024-11-06T13:02:23.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.614 [2024-11-06T13:02:23.894Z] =================================================================================================================== 00:20:37.614 [2024-11-06T13:02:23.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2435229 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.614 rmmod nvme_tcp 00:20:37.614 rmmod nvme_fabrics 00:20:37.614 rmmod nvme_keyring 00:20:37.614 14:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2434934 ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2434934 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2434934 ']' 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2434934 00:20:37.614 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2434934 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2434934' 00:20:37.875 killing process with pid 2434934 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2434934 00:20:37.875 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2434934 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.875 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.nHv1jZXlG7 /tmp/tmp.Hit83VOK7u /tmp/tmp.gBlnJbMHZu 00:20:40.422 00:20:40.422 real 1m27.981s 00:20:40.422 user 2m17.788s 00:20:40.422 sys 0m27.138s 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.422 ************************************ 00:20:40.422 END TEST nvmf_tls 00:20:40.422 ************************************ 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # 
'[' 3 -le 1 ']' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.422 ************************************ 00:20:40.422 START TEST nvmf_fips 00:20:40.422 ************************************ 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:40.422 * Looking for test storage... 00:20:40.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.422 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.422 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:40.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.422 --rc genhtml_branch_coverage=1 00:20:40.422 --rc genhtml_function_coverage=1 00:20:40.422 --rc genhtml_legend=1 00:20:40.422 --rc geninfo_all_blocks=1 00:20:40.422 --rc geninfo_unexecuted_blocks=1 00:20:40.422 00:20:40.422 ' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:40.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.422 --rc genhtml_branch_coverage=1 00:20:40.422 --rc genhtml_function_coverage=1 00:20:40.422 --rc genhtml_legend=1 00:20:40.422 --rc geninfo_all_blocks=1 00:20:40.422 --rc geninfo_unexecuted_blocks=1 00:20:40.422 00:20:40.422 ' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:40.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.422 --rc genhtml_branch_coverage=1 00:20:40.422 --rc genhtml_function_coverage=1 00:20:40.422 --rc genhtml_legend=1 00:20:40.422 --rc geninfo_all_blocks=1 00:20:40.422 --rc geninfo_unexecuted_blocks=1 00:20:40.422 00:20:40.422 ' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:40.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.422 --rc genhtml_branch_coverage=1 00:20:40.422 --rc genhtml_function_coverage=1 00:20:40.422 --rc genhtml_legend=1 00:20:40.422 --rc geninfo_all_blocks=1 00:20:40.422 --rc geninfo_unexecuted_blocks=1 00:20:40.422 00:20:40.422 ' 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.422 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:40.423 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:40.424 Error setting digest 00:20:40.424 40125F15567F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:40.424 40125F15567F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.424 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.424 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:48.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:48.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:48.566 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:48.567 Found net devices under 0000:31:00.0: cvl_0_0 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:48.567 Found net devices under 0000:31:00.1: cvl_0_1 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.567 14:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.567 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:20:48.567 00:20:48.567 --- 10.0.0.2 ping statistics --- 00:20:48.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.567 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:20:48.567 00:20:48.567 --- 10.0.0.1 ping statistics --- 00:20:48.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.567 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.567 14:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2439960 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2439960 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2439960 ']' 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.567 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:48.567 [2024-11-06 14:02:34.335565] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:20:48.567 [2024-11-06 14:02:34.335636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.567 [2024-11-06 14:02:34.436749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.567 [2024-11-06 14:02:34.486963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.567 [2024-11-06 14:02:34.487007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.567 [2024-11-06 14:02:34.487016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.567 [2024-11-06 14:02:34.487024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.567 [2024-11-06 14:02:34.487030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.567 [2024-11-06 14:02:34.487855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ism 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ism 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ism 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ism 00:20:49.139 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:49.139 [2024-11-06 14:02:35.347557] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.139 [2024-11-06 14:02:35.363555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.139 [2024-11-06 14:02:35.363859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.139 malloc0 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2440321 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2440321 /var/tmp/bdevperf.sock 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2440321 ']' 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.400 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.400 [2024-11-06 14:02:35.508804] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:20:49.400 [2024-11-06 14:02:35.508883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440321 ] 00:20:49.400 [2024-11-06 14:02:35.605015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.400 [2024-11-06 14:02:35.655921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.344 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.344 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:50.344 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ism 00:20:50.344 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.605 [2024-11-06 14:02:36.676401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.605 TLSTESTn1 00:20:50.605 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:50.867 Running I/O for 10 seconds... 
00:20:52.750 4213.00 IOPS, 16.46 MiB/s [2024-11-06T13:02:39.972Z] 5156.00 IOPS, 20.14 MiB/s [2024-11-06T13:02:40.912Z] 5409.00 IOPS, 21.13 MiB/s [2024-11-06T13:02:42.295Z] 5344.00 IOPS, 20.88 MiB/s [2024-11-06T13:02:43.238Z] 5410.80 IOPS, 21.14 MiB/s [2024-11-06T13:02:44.178Z] 5573.17 IOPS, 21.77 MiB/s [2024-11-06T13:02:45.117Z] 5580.14 IOPS, 21.80 MiB/s [2024-11-06T13:02:46.056Z] 5661.88 IOPS, 22.12 MiB/s [2024-11-06T13:02:47.027Z] 5747.78 IOPS, 22.45 MiB/s [2024-11-06T13:02:47.027Z] 5727.00 IOPS, 22.37 MiB/s 00:21:00.747 Latency(us) 00:21:00.747 [2024-11-06T13:02:47.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.747 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.747 Verification LBA range: start 0x0 length 0x2000 00:21:00.747 TLSTESTn1 : 10.01 5733.11 22.39 0.00 0.00 22293.78 4669.44 64662.19 00:21:00.747 [2024-11-06T13:02:47.027Z] =================================================================================================================== 00:21:00.747 [2024-11-06T13:02:47.027Z] Total : 5733.11 22.39 0.00 0.00 22293.78 4669.44 64662.19 00:21:00.747 { 00:21:00.747 "results": [ 00:21:00.747 { 00:21:00.747 "job": "TLSTESTn1", 00:21:00.747 "core_mask": "0x4", 00:21:00.747 "workload": "verify", 00:21:00.747 "status": "finished", 00:21:00.747 "verify_range": { 00:21:00.747 "start": 0, 00:21:00.747 "length": 8192 00:21:00.747 }, 00:21:00.747 "queue_depth": 128, 00:21:00.747 "io_size": 4096, 00:21:00.747 "runtime": 10.011323, 00:21:00.747 "iops": 5733.1084013571435, 00:21:00.747 "mibps": 22.394954692801342, 00:21:00.747 "io_failed": 0, 00:21:00.747 "io_timeout": 0, 00:21:00.747 "avg_latency_us": 22293.780725253793, 00:21:00.747 "min_latency_us": 4669.44, 00:21:00.747 "max_latency_us": 64662.18666666667 00:21:00.747 } 00:21:00.747 ], 00:21:00.747 "core_count": 1 00:21:00.747 } 00:21:00.747 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:00.747 14:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:00.747 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:00.748 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:00.748 nvmf_trace.0 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2440321 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2440321 ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2440321 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2440321 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2440321' 00:21:01.008 killing process with pid 2440321 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2440321 00:21:01.008 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.008 00:21:01.008 Latency(us) 00:21:01.008 [2024-11-06T13:02:47.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.008 [2024-11-06T13:02:47.288Z] =================================================================================================================== 00:21:01.008 [2024-11-06T13:02:47.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2440321 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.008 rmmod nvme_tcp 00:21:01.008 rmmod nvme_fabrics 00:21:01.008 rmmod nvme_keyring 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.008 14:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2439960 ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2439960 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2439960 ']' 00:21:01.008 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2439960 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2439960 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2439960' 00:21:01.269 killing process with pid 2439960 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2439960 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2439960 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.269 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ism 00:21:03.816 00:21:03.816 real 0m23.318s 00:21:03.816 user 0m25.116s 00:21:03.816 sys 0m9.577s 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:03.816 ************************************ 00:21:03.816 END TEST nvmf_fips 00:21:03.816 ************************************ 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:03.816 ************************************ 00:21:03.816 START TEST nvmf_control_msg_list 00:21:03.816 ************************************ 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:03.816 * Looking for test storage... 00:21:03.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:03.816 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:03.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.817 --rc genhtml_branch_coverage=1 00:21:03.817 --rc genhtml_function_coverage=1 00:21:03.817 --rc genhtml_legend=1 00:21:03.817 --rc geninfo_all_blocks=1 00:21:03.817 --rc geninfo_unexecuted_blocks=1 00:21:03.817 00:21:03.817 ' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:03.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.817 --rc genhtml_branch_coverage=1 00:21:03.817 --rc genhtml_function_coverage=1 00:21:03.817 --rc genhtml_legend=1 00:21:03.817 --rc geninfo_all_blocks=1 00:21:03.817 --rc geninfo_unexecuted_blocks=1 00:21:03.817 00:21:03.817 ' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:03.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.817 --rc genhtml_branch_coverage=1 00:21:03.817 --rc genhtml_function_coverage=1 00:21:03.817 --rc genhtml_legend=1 00:21:03.817 --rc geninfo_all_blocks=1 00:21:03.817 --rc geninfo_unexecuted_blocks=1 00:21:03.817 00:21:03.817 ' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:03.817 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.817 --rc genhtml_branch_coverage=1 00:21:03.817 --rc genhtml_function_coverage=1 00:21:03.817 --rc genhtml_legend=1 00:21:03.817 --rc geninfo_all_blocks=1 00:21:03.817 --rc geninfo_unexecuted_blocks=1 00:21:03.817 00:21:03.817 ' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:03.817 14:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.817 14:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.817 14:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.817 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.062 14:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.062 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:12.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:12.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.063 14:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:12.063 Found net devices under 0000:31:00.0: cvl_0_0 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.063 14:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:12.063 Found net devices under 0000:31:00.1: cvl_0_1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.063 14:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:21:12.063 00:21:12.063 --- 10.0.0.2 ping statistics --- 00:21:12.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.063 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:21:12.063 00:21:12.063 --- 10.0.0.1 ping statistics --- 00:21:12.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.063 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2446709 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2446709 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2446709 ']' 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.063 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.063 [2024-11-06 14:02:57.570148] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:21:12.063 [2024-11-06 14:02:57.570232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.063 [2024-11-06 14:02:57.671426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.063 [2024-11-06 14:02:57.721860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.063 [2024-11-06 14:02:57.721908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.063 [2024-11-06 14:02:57.721917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.063 [2024-11-06 14:02:57.721924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.063 [2024-11-06 14:02:57.721931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.063 [2024-11-06 14:02:57.722684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.326 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.327 [2024-11-06 14:02:58.418612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.327 Malloc0 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.327 [2024-11-06 14:02:58.473014] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2446996 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2446998 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2447000 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2446996 00:21:12.327 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.327 [2024-11-06 14:02:58.573907] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:12.327 [2024-11-06 14:02:58.574285] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.327 [2024-11-06 14:02:58.574587] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:13.713 Initializing NVMe Controllers 00:21:13.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:13.713 Initialization complete. Launching workers. 00:21:13.713 ======================================================== 00:21:13.713 Latency(us) 00:21:13.713 Device Information : IOPS MiB/s Average min max 00:21:13.713 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1534.00 5.99 651.97 211.01 977.07 00:21:13.713 ======================================================== 00:21:13.713 Total : 1534.00 5.99 651.97 211.01 977.07 00:21:13.713 00:21:13.713 Initializing NVMe Controllers 00:21:13.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:13.713 Initialization complete. Launching workers. 
00:21:13.713 ======================================================== 00:21:13.713 Latency(us) 00:21:13.713 Device Information : IOPS MiB/s Average min max 00:21:13.713 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40929.66 40756.62 41582.53 00:21:13.713 ======================================================== 00:21:13.713 Total : 25.00 0.10 40929.66 40756.62 41582.53 00:21:13.713 00:21:13.713 Initializing NVMe Controllers 00:21:13.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:13.713 Initialization complete. Launching workers. 00:21:13.713 ======================================================== 00:21:13.713 Latency(us) 00:21:13.713 Device Information : IOPS MiB/s Average min max 00:21:13.713 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1499.00 5.86 667.10 154.30 893.30 00:21:13.713 ======================================================== 00:21:13.713 Total : 1499.00 5.86 667.10 154.30 893.30 00:21:13.713 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2446998 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2447000 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.713 14:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.713 rmmod nvme_tcp 00:21:13.713 rmmod nvme_fabrics 00:21:13.713 rmmod nvme_keyring 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2446709 ']' 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2446709 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2446709 ']' 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2446709 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2446709 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 2446709' 00:21:13.713 killing process with pid 2446709 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2446709 00:21:13.713 14:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2446709 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.977 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.892 00:21:15.892 real 0m12.468s 00:21:15.892 user 0m7.851s 
00:21:15.892 sys 0m6.620s 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:15.892 ************************************ 00:21:15.892 END TEST nvmf_control_msg_list 00:21:15.892 ************************************ 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:15.892 14:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.153 ************************************ 00:21:16.153 START TEST nvmf_wait_for_buf 00:21:16.154 ************************************ 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.154 * Looking for test storage... 
00:21:16.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:21:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.154 --rc genhtml_branch_coverage=1 00:21:16.154 --rc genhtml_function_coverage=1 00:21:16.154 --rc genhtml_legend=1 00:21:16.154 --rc geninfo_all_blocks=1 00:21:16.154 --rc geninfo_unexecuted_blocks=1 00:21:16.154 00:21:16.154 ' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.154 --rc genhtml_branch_coverage=1 00:21:16.154 --rc genhtml_function_coverage=1 00:21:16.154 --rc genhtml_legend=1 00:21:16.154 --rc geninfo_all_blocks=1 00:21:16.154 --rc geninfo_unexecuted_blocks=1 00:21:16.154 00:21:16.154 ' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.154 --rc genhtml_branch_coverage=1 00:21:16.154 --rc genhtml_function_coverage=1 00:21:16.154 --rc genhtml_legend=1 00:21:16.154 --rc geninfo_all_blocks=1 00:21:16.154 --rc geninfo_unexecuted_blocks=1 00:21:16.154 00:21:16.154 ' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.154 --rc genhtml_branch_coverage=1 00:21:16.154 --rc genhtml_function_coverage=1 00:21:16.154 --rc genhtml_legend=1 00:21:16.154 --rc geninfo_all_blocks=1 00:21:16.154 --rc geninfo_unexecuted_blocks=1 00:21:16.154 00:21:16.154 ' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.154 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.155 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.415 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.415 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:16.415 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.415 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:24.559 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:24.559 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:24.559 Found net devices under 0000:31:00.0: cvl_0_0 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.559 14:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.559 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:24.560 Found net devices under 0000:31:00.1: cvl_0_1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.560 14:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.560 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.560 14:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:21:24.560 00:21:24.560 --- 10.0.0.2 ping statistics --- 00:21:24.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.560 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:21:24.560 00:21:24.560 --- 10.0.0.1 ping statistics --- 00:21:24.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.560 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2451762 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
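The trace above (nvmf/common.sh's nvmf_tcp_init) splits the two E810 ports between the host and a private network namespace so that target and initiator traffic actually crosses the wire. A condensed sketch of that sequence, using the interface names and addresses from this run (adapt cvl_0_0/cvl_0_1 and 10.0.0.x to your NICs; requires root, so it is shown for reference rather than as a drop-in script):

```shell
# Sketch of the namespace split nvmf_tcp_init performs above (not a
# verbatim excerpt; interface names and addresses come from this run).
ip netns add cvl_0_0_ns_spdk                 # namespace that will host the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # sanity-check host -> namespace path
```

The two pings in the trace (host to 10.0.0.2, then namespace back to 10.0.0.1) are the round-trip check before `return 0` declares the topology ready.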
nvmf/common.sh@510 -- # waitforlisten 2451762 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2451762 ']' 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:24.560 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.560 [2024-11-06 14:03:10.197106] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:21:24.560 [2024-11-06 14:03:10.197177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.560 [2024-11-06 14:03:10.297101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.560 [2024-11-06 14:03:10.348816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.560 [2024-11-06 14:03:10.348867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:24.560 [2024-11-06 14:03:10.348876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.560 [2024-11-06 14:03:10.348885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.560 [2024-11-06 14:03:10.348891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.560 [2024-11-06 14:03:10.349720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.822 
14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.822 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.084 Malloc0 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.084 [2024-11-06 14:03:11.180135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.084 [2024-11-06 14:03:11.216655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
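The rpc_cmd calls traced above build the target configuration step by step before the perf run. As a reference sketch, the same sequence driven directly through SPDK's rpc.py would look like the following; the rpc.py path and default socket are assumptions, and the deliberately tiny `--small-pool-count 154` is what later starves the iobuf pool and forces the wait-for-buffer retries this test exists to exercise:

```shell
# Hypothetical standalone replay of the wait_for_buf.sh RPC sequence above.
# Assumes a running nvmf_tgt started with --wait-for-rpc on the default
# /var/tmp/spdk.sock socket; the rpc.py path is an assumption.
RPC="scripts/rpc.py"
$RPC accel_set_options --small-cache-size 0 --large-cache-size 0
$RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # starve the small pool
$RPC framework_start_init
$RPC bdev_malloc_create -b Malloc0 32 512                           # 32 MiB bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

Note that `iobuf_set_options` and `accel_set_options` must land before `framework_start_init`, which is why the target was launched with `--wait-for-rpc`.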
00:21:25.084 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:25.084 [2024-11-06 14:03:11.318857] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:27.001 Initializing NVMe Controllers 00:21:27.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:27.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:27.001 Initialization complete. Launching workers. 00:21:27.001 ======================================================== 00:21:27.001 Latency(us) 00:21:27.001 Device Information : IOPS MiB/s Average min max 00:21:27.001 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.24 8015.84 63850.14 00:21:27.001 ======================================================== 00:21:27.001 Total : 129.00 16.12 32295.24 8015.84 63850.14 00:21:27.001 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.001 14:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.001 rmmod nvme_tcp 00:21:27.001 rmmod nvme_fabrics 00:21:27.001 rmmod nvme_keyring 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2451762 ']' 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2451762 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2451762 ']' 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2451762 
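The pass/fail check above comes from `iobuf_get_stats`: wait_for_buf.sh@32 extracts the nvmf_TCP module's small-pool retry counter with jq and the test only passes when it is non-zero. A self-contained illustration of that filter against a mocked stats payload (the JSON shape is abbreviated; only the fields the filter touches are kept, and the retry value is taken from this run):

```shell
# Mocked iobuf_get_stats output (abbreviated; real output has more modules
# and fields). The retry value 2038 mirrors the trace above.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":2038}},
        {"module":"bdev","small_pool":{"retry":0}}]'
# Same jq filter wait_for_buf.sh uses to derive retry_count.
retry_count=$(echo "$stats" | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
echo "$retry_count"
# A retry count of 0 would mean the small iobuf pool was never exhausted,
# i.e. the wait-for-buffer path was not exercised, and the [[ -eq 0 ]]
# check above would fail the test.
```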
00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:27.001 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2451762 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2451762' 00:21:27.001 killing process with pid 2451762 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2451762 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2451762 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.001 14:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.001 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.549 00:21:29.549 real 0m13.099s 00:21:29.549 user 0m5.337s 00:21:29.549 sys 0m6.291s 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.549 ************************************ 00:21:29.549 END TEST nvmf_wait_for_buf 00:21:29.549 ************************************ 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.549 14:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.690 
14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:37.690 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.690 14:03:22 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:37.690 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:37.690 Found net devices under 0000:31:00.0: cvl_0_0 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:37.690 Found net devices under 0000:31:00.1: cvl_0_1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.690 ************************************ 00:21:37.690 START TEST nvmf_perf_adq 00:21:37.690 ************************************ 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.690 * Looking for test storage... 00:21:37.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.690 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.691 --rc genhtml_branch_coverage=1 00:21:37.691 --rc genhtml_function_coverage=1 00:21:37.691 --rc genhtml_legend=1 00:21:37.691 --rc geninfo_all_blocks=1 00:21:37.691 --rc geninfo_unexecuted_blocks=1 00:21:37.691 00:21:37.691 ' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.691 --rc genhtml_branch_coverage=1 00:21:37.691 --rc genhtml_function_coverage=1 00:21:37.691 --rc genhtml_legend=1 00:21:37.691 --rc geninfo_all_blocks=1 00:21:37.691 --rc geninfo_unexecuted_blocks=1 00:21:37.691 00:21:37.691 ' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.691 --rc genhtml_branch_coverage=1 00:21:37.691 --rc genhtml_function_coverage=1 00:21:37.691 --rc genhtml_legend=1 00:21:37.691 --rc geninfo_all_blocks=1 00:21:37.691 --rc geninfo_unexecuted_blocks=1 00:21:37.691 00:21:37.691 ' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.691 --rc genhtml_branch_coverage=1 00:21:37.691 --rc genhtml_function_coverage=1 00:21:37.691 --rc genhtml_legend=1 00:21:37.691 --rc geninfo_all_blocks=1 00:21:37.691 --rc geninfo_unexecuted_blocks=1 00:21:37.691 00:21:37.691 ' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.691 14:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.691 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.280 14:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:44.280 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:44.280 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:44.280 Found net devices under 0000:31:00.0: cvl_0_0 00:21:44.280 14:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:44.280 Found net devices under 0000:31:00.1: cvl_0_1 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:44.280 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:45.667 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:48.215 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:53.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:53.510 14:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:53.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:53.510 Found net devices under 0000:31:00.0: cvl_0_0 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:53.510 Found net devices under 0000:31:00.1: cvl_0_1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.510 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:53.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:21:53.511 00:21:53.511 --- 10.0.0.2 ping statistics --- 00:21:53.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.511 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:53.511 00:21:53.511 --- 10.0.0.1 ping statistics --- 00:21:53.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.511 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2462307 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2462307 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2462307 ']' 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:53.511 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.511 [2024-11-06 14:03:39.524571] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
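The `nvmfappstart` step above launches `nvmf_tgt` inside the target namespace with `--wait-for-rpc`, then `waitforlisten` polls until the app is up and listening on `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A minimal sketch of that polling pattern is below; the helper name and retry interval are illustrative, not SPDK's exact implementation:

```shell
# Hedged sketch of the waitforlisten pattern from the trace above:
# poll until the target's RPC UNIX socket appears, giving up after
# max_retries attempts. Names and interval here are illustrative only.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -e "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock" >&2
            return 1
        fi
        sleep 0.1
    done
    echo "ready: $sock"
}
```

In the real script the socket path is `/var/tmp/spdk.sock` and the loop additionally checks that the target PID is still alive between retries.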
00:21:53.511 [2024-11-06 14:03:39.524635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.511 [2024-11-06 14:03:39.628000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.511 [2024-11-06 14:03:39.682943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.511 [2024-11-06 14:03:39.682996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.511 [2024-11-06 14:03:39.683005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.511 [2024-11-06 14:03:39.683012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.511 [2024-11-06 14:03:39.683019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
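The `nvmf_tcp_init` plumbing traced earlier (netns creation, moving one E810 port into the namespace as the target side, addressing both ends, and opening TCP/4420) boils down to roughly the following. This is a dry-run sketch that only echoes the commands, since the real sequence requires root:

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing seen in the trace:
# one port (cvl_0_0) moves into a namespace as the target side, the
# peer (cvl_0_1) stays in the root namespace as the initiator, and an
# iptables rule admits TCP/4420. Echoed rather than executed because
# every command here needs root privileges.
ns=cvl_0_0_ns_spdk
setup_cmds=(
    "ip netns add $ns"
    "ip link set cvl_0_0 netns $ns"
    "ip addr add 10.0.0.1/24 dev cvl_0_1"
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"
    "ip link set cvl_0_1 up"
    "ip netns exec $ns ip link set cvl_0_0 up"
    "ip netns exec $ns ip link set lo up"
    "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
)
for cmd in "${setup_cmds[@]}"; do
    echo "+ $cmd"
done
```

The cross-namespace pings that follow in the trace (10.0.0.1 ↔ 10.0.0.2) are the sanity check that this wiring worked before the target is started.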
00:21:53.511 [2024-11-06 14:03:39.685212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.511 [2024-11-06 14:03:39.685373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.511 [2024-11-06 14:03:39.685534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.511 [2024-11-06 14:03:39.685535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.081 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:54.081 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:54.081 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.081 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.081 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:54.342 14:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 [2024-11-06 14:03:40.548050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 Malloc1 00:21:54.342 14:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.342 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.602 [2024-11-06 14:03:40.622655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.602 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.602 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2462656 00:21:54.602 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:54.602 14:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:56.514 "tick_rate": 2400000000, 00:21:56.514 "poll_groups": [ 00:21:56.514 { 00:21:56.514 "name": "nvmf_tgt_poll_group_000", 00:21:56.514 "admin_qpairs": 1, 00:21:56.514 "io_qpairs": 1, 00:21:56.514 "current_admin_qpairs": 1, 00:21:56.514 "current_io_qpairs": 1, 00:21:56.514 "pending_bdev_io": 0, 00:21:56.514 "completed_nvme_io": 16815, 00:21:56.514 "transports": [ 00:21:56.514 { 00:21:56.514 "trtype": "TCP" 00:21:56.514 } 00:21:56.514 ] 00:21:56.514 }, 00:21:56.514 { 00:21:56.514 "name": "nvmf_tgt_poll_group_001", 00:21:56.514 "admin_qpairs": 0, 00:21:56.514 "io_qpairs": 1, 00:21:56.514 "current_admin_qpairs": 0, 00:21:56.514 "current_io_qpairs": 1, 00:21:56.514 "pending_bdev_io": 0, 00:21:56.514 "completed_nvme_io": 19209, 00:21:56.514 "transports": [ 00:21:56.514 { 00:21:56.514 "trtype": "TCP" 00:21:56.514 } 00:21:56.514 ] 00:21:56.514 }, 00:21:56.514 { 00:21:56.514 "name": "nvmf_tgt_poll_group_002", 00:21:56.514 "admin_qpairs": 0, 00:21:56.514 "io_qpairs": 1, 00:21:56.514 "current_admin_qpairs": 0, 00:21:56.514 "current_io_qpairs": 1, 00:21:56.514 "pending_bdev_io": 0, 00:21:56.514 "completed_nvme_io": 19021, 00:21:56.514 
"transports": [ 00:21:56.514 { 00:21:56.514 "trtype": "TCP" 00:21:56.514 } 00:21:56.514 ] 00:21:56.514 }, 00:21:56.514 { 00:21:56.514 "name": "nvmf_tgt_poll_group_003", 00:21:56.514 "admin_qpairs": 0, 00:21:56.514 "io_qpairs": 1, 00:21:56.514 "current_admin_qpairs": 0, 00:21:56.514 "current_io_qpairs": 1, 00:21:56.514 "pending_bdev_io": 0, 00:21:56.514 "completed_nvme_io": 17128, 00:21:56.514 "transports": [ 00:21:56.514 { 00:21:56.514 "trtype": "TCP" 00:21:56.514 } 00:21:56.514 ] 00:21:56.514 } 00:21:56.514 ] 00:21:56.514 }' 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:56.514 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2462656 00:22:04.647 Initializing NVMe Controllers 00:22:04.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:04.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:04.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:04.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:04.647 Initialization complete. Launching workers. 
00:22:04.647 ======================================================== 00:22:04.647 Latency(us) 00:22:04.647 Device Information : IOPS MiB/s Average min max 00:22:04.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12648.20 49.41 5060.07 1520.12 11373.42 00:22:04.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13861.50 54.15 4616.98 1168.32 13461.06 00:22:04.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13492.70 52.71 4742.63 1334.19 13151.23 00:22:04.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13022.40 50.87 4913.87 991.21 11313.61 00:22:04.647 ======================================================== 00:22:04.647 Total : 53024.79 207.13 4827.56 991.21 13461.06 00:22:04.647 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.647 rmmod nvme_tcp 00:22:04.647 rmmod nvme_fabrics 00:22:04.647 rmmod nvme_keyring 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:04.647 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:04.647 14:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2462307 ']' 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2462307 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2462307 ']' 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2462307 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:04.648 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2462307 00:22:04.908 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:04.908 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:04.908 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2462307' 00:22:04.908 killing process with pid 2462307 00:22:04.908 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2462307 00:22:04.908 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2462307 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:04.908 
14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.908 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.451 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.451 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:07.451 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:07.451 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:08.836 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:11.384 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.673 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.674 14:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:16.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:16.674 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:16.674 Found net devices under 0000:31:00.0: cvl_0_0 00:22:16.674 14:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:16.674 Found net devices under 0000:31:00.1: cvl_0_1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:16.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:22:16.674 00:22:16.674 --- 10.0.0.2 ping statistics --- 00:22:16.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.674 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:16.674 00:22:16.674 --- 10.0.0.1 ping statistics --- 00:22:16.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.674 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:16.674 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:16.675 net.core.busy_poll = 1 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:16.675 net.core.busy_read = 1 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2467118 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2467118 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2467118 ']' 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:16.675 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.675 [2024-11-06 14:04:02.773880] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:22:16.675 [2024-11-06 14:04:02.773948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.675 [2024-11-06 14:04:02.876085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.675 [2024-11-06 14:04:02.928553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.675 [2024-11-06 14:04:02.928600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.675 [2024-11-06 14:04:02.928609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.675 [2024-11-06 14:04:02.928622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:16.675 [2024-11-06 14:04:02.928628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.675 [2024-11-06 14:04:02.931051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.675 [2024-11-06 14:04:02.931217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.675 [2024-11-06 14:04:02.931382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.675 [2024-11-06 14:04:02.931382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 [2024-11-06 14:04:03.801041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 Malloc1 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 [2024-11-06 14:04:03.874555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2467471 
00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:17.619 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:20.163 "tick_rate": 2400000000, 00:22:20.163 "poll_groups": [ 00:22:20.163 { 00:22:20.163 "name": "nvmf_tgt_poll_group_000", 00:22:20.163 "admin_qpairs": 1, 00:22:20.163 "io_qpairs": 2, 00:22:20.163 "current_admin_qpairs": 1, 00:22:20.163 "current_io_qpairs": 2, 00:22:20.163 "pending_bdev_io": 0, 00:22:20.163 "completed_nvme_io": 24910, 00:22:20.163 "transports": [ 00:22:20.163 { 00:22:20.163 "trtype": "TCP" 00:22:20.163 } 00:22:20.163 ] 00:22:20.163 }, 00:22:20.163 { 00:22:20.163 "name": "nvmf_tgt_poll_group_001", 00:22:20.163 "admin_qpairs": 0, 00:22:20.163 "io_qpairs": 2, 00:22:20.163 "current_admin_qpairs": 0, 00:22:20.163 "current_io_qpairs": 2, 00:22:20.163 "pending_bdev_io": 0, 00:22:20.163 "completed_nvme_io": 28660, 00:22:20.163 "transports": [ 00:22:20.163 { 00:22:20.163 "trtype": "TCP" 00:22:20.163 } 00:22:20.163 ] 00:22:20.163 }, 00:22:20.163 { 00:22:20.163 "name": "nvmf_tgt_poll_group_002", 00:22:20.163 "admin_qpairs": 0, 00:22:20.163 "io_qpairs": 0, 00:22:20.163 "current_admin_qpairs": 0, 
00:22:20.163 "current_io_qpairs": 0, 00:22:20.163 "pending_bdev_io": 0, 00:22:20.163 "completed_nvme_io": 0, 00:22:20.163 "transports": [ 00:22:20.163 { 00:22:20.163 "trtype": "TCP" 00:22:20.163 } 00:22:20.163 ] 00:22:20.163 }, 00:22:20.163 { 00:22:20.163 "name": "nvmf_tgt_poll_group_003", 00:22:20.163 "admin_qpairs": 0, 00:22:20.163 "io_qpairs": 0, 00:22:20.163 "current_admin_qpairs": 0, 00:22:20.163 "current_io_qpairs": 0, 00:22:20.163 "pending_bdev_io": 0, 00:22:20.163 "completed_nvme_io": 0, 00:22:20.163 "transports": [ 00:22:20.163 { 00:22:20.163 "trtype": "TCP" 00:22:20.163 } 00:22:20.163 ] 00:22:20.163 } 00:22:20.163 ] 00:22:20.163 }' 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:20.163 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2467471 00:22:28.351 Initializing NVMe Controllers 00:22:28.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:28.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:28.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:28.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:28.351 Initialization complete. Launching workers. 
00:22:28.351 ======================================================== 00:22:28.351 Latency(us) 00:22:28.351 Device Information : IOPS MiB/s Average min max 00:22:28.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9582.79 37.43 6679.01 945.46 54409.28 00:22:28.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9460.79 36.96 6764.48 1115.76 53364.25 00:22:28.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10883.89 42.52 5880.49 851.96 53481.28 00:22:28.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7741.80 30.24 8268.38 1321.30 53024.32 00:22:28.351 ======================================================== 00:22:28.351 Total : 37669.28 147.15 6796.40 851.96 54409.28 00:22:28.351 00:22:28.351 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:28.351 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.351 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:28.351 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.352 rmmod nvme_tcp 00:22:28.352 rmmod nvme_fabrics 00:22:28.352 rmmod nvme_keyring 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:28.352 14:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2467118 ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2467118 ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2467118' 00:22:28.352 killing process with pid 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2467118 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:28.352 
14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.352 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:31.683 00:22:31.683 real 0m54.692s 00:22:31.683 user 2m49.537s 00:22:31.683 sys 0m11.861s 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.683 ************************************ 00:22:31.683 END TEST nvmf_perf_adq 00:22:31.683 ************************************ 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.683 ************************************ 00:22:31.683 START TEST nvmf_shutdown 00:22:31.683 ************************************ 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:31.683 * Looking for test storage... 00:22:31.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.683 14:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.683 --rc genhtml_branch_coverage=1 00:22:31.683 --rc genhtml_function_coverage=1 00:22:31.683 --rc genhtml_legend=1 00:22:31.683 --rc geninfo_all_blocks=1 00:22:31.683 --rc geninfo_unexecuted_blocks=1 00:22:31.683 00:22:31.683 ' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.683 --rc genhtml_branch_coverage=1 00:22:31.683 --rc genhtml_function_coverage=1 00:22:31.683 --rc genhtml_legend=1 00:22:31.683 --rc geninfo_all_blocks=1 00:22:31.683 --rc geninfo_unexecuted_blocks=1 00:22:31.683 00:22:31.683 ' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.683 --rc genhtml_branch_coverage=1 00:22:31.683 --rc genhtml_function_coverage=1 00:22:31.683 --rc genhtml_legend=1 00:22:31.683 --rc geninfo_all_blocks=1 00:22:31.683 --rc geninfo_unexecuted_blocks=1 00:22:31.683 00:22:31.683 ' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.683 --rc genhtml_branch_coverage=1 00:22:31.683 --rc genhtml_function_coverage=1 00:22:31.683 --rc genhtml_legend=1 00:22:31.683 --rc geninfo_all_blocks=1 00:22:31.683 --rc geninfo_unexecuted_blocks=1 00:22:31.683 00:22:31.683 ' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.683 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:31.684 ************************************ 00:22:31.684 START TEST nvmf_shutdown_tc1 00:22:31.684 ************************************ 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.684 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:39.829 14:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.829 14:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.829 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:39.830 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.830 14:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:39.830 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:39.830 Found net devices under 0000:31:00.0: cvl_0_0 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:39.830 Found net devices under 0000:31:00.1: cvl_0_1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.830 14:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:22:39.830 00:22:39.830 --- 10.0.0.2 ping statistics --- 00:22:39.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.830 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:39.830 00:22:39.830 --- 10.0.0.1 ping statistics --- 00:22:39.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.830 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2473976 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2473976 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2473976 ']' 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.830 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.831 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:39.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.831 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.831 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.831 [2024-11-06 14:04:25.511000] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:22:39.831 [2024-11-06 14:04:25.511063] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.831 [2024-11-06 14:04:25.613549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.831 [2024-11-06 14:04:25.665970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.831 [2024-11-06 14:04:25.666019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.831 [2024-11-06 14:04:25.666029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.831 [2024-11-06 14:04:25.666037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.831 [2024-11-06 14:04:25.666044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.831 [2024-11-06 14:04:25.668450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.831 [2024-11-06 14:04:25.668610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.831 [2024-11-06 14:04:25.668787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.831 [2024-11-06 14:04:25.668788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.092 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:40.093 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:40.093 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.093 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.093 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.355 [2024-11-06 14:04:26.394213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.355 14:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.355 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.355 Malloc1 00:22:40.355 [2024-11-06 14:04:26.522724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.355 Malloc2 00:22:40.355 Malloc3 00:22:40.617 Malloc4 00:22:40.617 Malloc5 00:22:40.617 Malloc6 00:22:40.617 Malloc7 00:22:40.617 Malloc8 00:22:40.617 Malloc9 
00:22:40.878 Malloc10 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2474359 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2474359 /var/tmp/bdevperf.sock 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2474359 ']' 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.878 { 00:22:40.878 "params": { 00:22:40.878 "name": "Nvme$subsystem", 00:22:40.878 "trtype": "$TEST_TRANSPORT", 00:22:40.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.878 "adrfam": "ipv4", 00:22:40.878 "trsvcid": "$NVMF_PORT", 00:22:40.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.878 "hdgst": ${hdgst:-false}, 00:22:40.878 "ddgst": ${ddgst:-false} 00:22:40.878 }, 00:22:40.878 "method": "bdev_nvme_attach_controller" 00:22:40.878 } 00:22:40.878 EOF 00:22:40.878 )") 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.878 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.878 14:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.878 { 00:22:40.878 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 
00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 [2024-11-06 14:04:27.039170] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:22:40.879 [2024-11-06 14:04:27.039242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:40.879 { 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme$subsystem", 00:22:40.879 "trtype": "$TEST_TRANSPORT", 00:22:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "$NVMF_PORT", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.879 "hdgst": ${hdgst:-false}, 00:22:40.879 "ddgst": ${ddgst:-false} 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 } 00:22:40.879 EOF 00:22:40.879 )") 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:40.879 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme1", 00:22:40.879 "trtype": "tcp", 00:22:40.879 "traddr": "10.0.0.2", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "4420", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.879 "hdgst": false, 00:22:40.879 "ddgst": false 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 },{ 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme2", 00:22:40.879 "trtype": "tcp", 00:22:40.879 "traddr": "10.0.0.2", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "4420", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.879 "hdgst": false, 00:22:40.879 "ddgst": false 00:22:40.879 }, 00:22:40.879 "method": "bdev_nvme_attach_controller" 00:22:40.879 },{ 00:22:40.879 "params": { 00:22:40.879 "name": "Nvme3", 00:22:40.879 "trtype": "tcp", 00:22:40.879 "traddr": 
"10.0.0.2", 00:22:40.879 "adrfam": "ipv4", 00:22:40.879 "trsvcid": "4420", 00:22:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.879 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.879 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme4", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme5", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme6", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme7", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 
"method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme8", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme9", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 },{ 00:22:40.880 "params": { 00:22:40.880 "name": "Nvme10", 00:22:40.880 "trtype": "tcp", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "adrfam": "ipv4", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 }, 00:22:40.880 "method": "bdev_nvme_attach_controller" 00:22:40.880 }' 00:22:40.880 [2024-11-06 14:04:27.136579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.141 [2024-11-06 14:04:27.189115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2474359 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:42.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2474359 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:42.525 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2473976 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.467 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.467 { 00:22:43.467 "params": { 00:22:43.467 "name": "Nvme$subsystem", 00:22:43.467 "trtype": "$TEST_TRANSPORT", 00:22:43.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.467 "adrfam": "ipv4", 00:22:43.467 "trsvcid": "$NVMF_PORT", 00:22:43.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.467 "hdgst": ${hdgst:-false}, 00:22:43.467 "ddgst": ${ddgst:-false} 00:22:43.467 }, 00:22:43.467 "method": "bdev_nvme_attach_controller" 00:22:43.467 } 00:22:43.467 EOF 00:22:43.467 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 
00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 
00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 [2024-11-06 14:04:29.435274] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:22:43.468 [2024-11-06 14:04:29.435329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474732 ] 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 
"params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.468 { 00:22:43.468 "params": { 00:22:43.468 "name": "Nvme$subsystem", 00:22:43.468 "trtype": "$TEST_TRANSPORT", 00:22:43.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.468 "adrfam": "ipv4", 00:22:43.468 "trsvcid": "$NVMF_PORT", 00:22:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.468 "hdgst": ${hdgst:-false}, 00:22:43.468 "ddgst": ${ddgst:-false} 00:22:43.468 }, 00:22:43.468 "method": "bdev_nvme_attach_controller" 00:22:43.468 } 00:22:43.468 EOF 00:22:43.468 )") 00:22:43.468 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.469 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:43.469 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:43.469 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme1", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme2", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme3", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme4", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 
00:22:43.469 "name": "Nvme5", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme6", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme7", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme8", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.469 "hdgst": false, 00:22:43.469 "ddgst": false 00:22:43.469 }, 00:22:43.469 "method": "bdev_nvme_attach_controller" 00:22:43.469 },{ 00:22:43.469 "params": { 00:22:43.469 "name": "Nvme9", 00:22:43.469 "trtype": "tcp", 00:22:43.469 "traddr": "10.0.0.2", 00:22:43.469 "adrfam": "ipv4", 00:22:43.469 "trsvcid": "4420", 00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:43.469 "hdgst": false,
00:22:43.469 "ddgst": false
00:22:43.469 },
00:22:43.469 "method": "bdev_nvme_attach_controller"
00:22:43.469 },{
00:22:43.469 "params": {
00:22:43.469 "name": "Nvme10",
00:22:43.469 "trtype": "tcp",
00:22:43.469 "traddr": "10.0.0.2",
00:22:43.469 "adrfam": "ipv4",
00:22:43.469 "trsvcid": "4420",
00:22:43.469 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:43.469 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:43.469 "hdgst": false,
00:22:43.469 "ddgst": false
00:22:43.469 },
00:22:43.469 "method": "bdev_nvme_attach_controller"
00:22:43.469 }'
00:22:43.469 [2024-11-06 14:04:29.524537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:43.469 [2024-11-06 14:04:29.560087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:44.411 Running I/O for 1 seconds...
00:22:45.797 1860.00 IOPS, 116.25 MiB/s
00:22:45.797 Latency(us)
00:22:45.797 [2024-11-06T13:04:32.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme1n1 : 1.09 238.30 14.89 0.00 0.00 260087.81 18568.53 241172.48
00:22:45.797 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme2n1 : 1.10 232.37 14.52 0.00 0.00 267593.17 15728.64 251658.24
00:22:45.797 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme3n1 : 1.06 240.97 15.06 0.00 0.00 253026.77 15073.28 258648.75
00:22:45.797 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme4n1 : 1.09 234.21 14.64 0.00 0.00 254436.69 21299.20 242920.11
00:22:45.797 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme5n1 : 1.11 231.11 14.44 0.00 0.00 254705.07 20316.16 248162.99
00:22:45.797 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme6n1 : 1.16 219.99 13.75 0.00 0.00 263826.77 15947.09 277872.64
00:22:45.797 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme7n1 : 1.17 273.60 17.10 0.00 0.00 208234.50 11250.35 251658.24
00:22:45.797 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme8n1 : 1.18 271.83 16.99 0.00 0.00 205684.65 12506.45 248162.99
00:22:45.797 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme9n1 : 1.17 223.94 14.00 0.00 0.00 243734.44 3850.24 246415.36
00:22:45.797 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.797 Verification LBA range: start 0x0 length 0x400
00:22:45.797 Nvme10n1 : 1.18 271.26 16.95 0.00 0.00 198887.00 13216.43 248162.99
00:22:45.797 [2024-11-06T13:04:32.077Z] ===================================================================================================================
00:22:45.797 [2024-11-06T13:04:32.077Z] Total : 2437.59 152.35 0.00 0.00 238498.42 3850.24 277872.64
00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.797 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.797 rmmod nvme_tcp 00:22:45.797 rmmod nvme_fabrics 00:22:45.797 rmmod nvme_keyring 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2473976 ']' 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2473976 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2473976 ']' 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # kill -0 2473976 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:45.797 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2473976 00:22:46.058 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:46.058 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:46.058 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2473976' 00:22:46.058 killing process with pid 2473976 00:22:46.058 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2473976 00:22:46.058 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2473976 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.319 14:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.319 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.320 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.320 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.234 00:22:48.234 real 0m16.696s 00:22:48.234 user 0m32.405s 00:22:48.234 sys 0m7.048s 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.234 ************************************ 00:22:48.234 END TEST nvmf_shutdown_tc1 00:22:48.234 ************************************ 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:48.234 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:48.495 ************************************ 00:22:48.495 
START TEST nvmf_shutdown_tc2 00:22:48.495 ************************************ 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.495 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.495 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.496 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.496 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:48.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:48.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:48.496 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.496 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:48.496 Found net devices under 0000:31:00.0: cvl_0_0 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:48.496 Found net devices under 0000:31:00.1: cvl_0_1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.496 14:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.496 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:48.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms
00:22:48.758
00:22:48.758 --- 10.0.0.2 ping statistics ---
00:22:48.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:48.758 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:48.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:48.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:22:48.758
00:22:48.758 --- 10.0.0.1 ping statistics ---
00:22:48.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:48.758 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2475847
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2475847
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2475847 ']'
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:48.758 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:48.758 [2024-11-06 14:04:34.988478] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:22:48.758 [2024-11-06 14:04:34.988543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:49.020 [2024-11-06 14:04:35.086839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:49.020 [2024-11-06 14:04:35.121458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:49.020 [2024-11-06 14:04:35.121489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:49.020 [2024-11-06 14:04:35.121496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:49.020 [2024-11-06 14:04:35.121502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:49.020 [2024-11-06 14:04:35.121507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:49.020 [2024-11-06 14:04:35.122778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.020 [2024-11-06 14:04:35.122942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.020 [2024-11-06 14:04:35.123093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.020 [2024-11-06 14:04:35.123094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.592 [2024-11-06 14:04:35.842590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.592 14:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.592 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.853 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.853 Malloc1 00:22:49.853 [2024-11-06 14:04:35.953433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.853 Malloc2 00:22:49.853 Malloc3 00:22:49.853 Malloc4 00:22:49.853 Malloc5 00:22:49.853 Malloc6 00:22:50.115 Malloc7 00:22:50.115 Malloc8 00:22:50.115 Malloc9 
00:22:50.115 Malloc10 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2476231 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2476231 /var/tmp/bdevperf.sock 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2476231 ']' 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.115 { 00:22:50.115 "params": { 00:22:50.115 "name": "Nvme$subsystem", 00:22:50.115 "trtype": "$TEST_TRANSPORT", 00:22:50.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.115 "adrfam": "ipv4", 00:22:50.115 "trsvcid": "$NVMF_PORT", 00:22:50.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.115 "hdgst": ${hdgst:-false}, 00:22:50.115 "ddgst": ${ddgst:-false} 00:22:50.115 }, 00:22:50.115 "method": "bdev_nvme_attach_controller" 00:22:50.115 } 00:22:50.115 EOF 00:22:50.115 )") 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:50.115 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.115 { 00:22:50.115 "params": { 00:22:50.115 "name": "Nvme$subsystem", 00:22:50.115 "trtype": "$TEST_TRANSPORT", 00:22:50.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.115 "adrfam": "ipv4", 00:22:50.115 "trsvcid": "$NVMF_PORT", 00:22:50.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.115 "hdgst": ${hdgst:-false}, 00:22:50.116 "ddgst": ${ddgst:-false} 00:22:50.116 }, 00:22:50.116 "method": "bdev_nvme_attach_controller" 00:22:50.116 } 00:22:50.116 EOF 00:22:50.116 )") 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.116 { 00:22:50.116 "params": { 00:22:50.116 "name": "Nvme$subsystem", 00:22:50.116 "trtype": "$TEST_TRANSPORT", 00:22:50.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.116 "adrfam": "ipv4", 00:22:50.116 "trsvcid": "$NVMF_PORT", 00:22:50.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.116 "hdgst": ${hdgst:-false}, 00:22:50.116 "ddgst": ${ddgst:-false} 00:22:50.116 }, 00:22:50.116 "method": "bdev_nvme_attach_controller" 00:22:50.116 } 00:22:50.116 EOF 00:22:50.116 )") 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:50.116 { 00:22:50.116 "params": { 00:22:50.116 "name": "Nvme$subsystem", 00:22:50.116 "trtype": "$TEST_TRANSPORT", 00:22:50.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.116 "adrfam": "ipv4", 00:22:50.116 "trsvcid": "$NVMF_PORT", 00:22:50.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.116 "hdgst": ${hdgst:-false}, 00:22:50.116 "ddgst": ${ddgst:-false} 00:22:50.116 }, 00:22:50.116 "method": "bdev_nvme_attach_controller" 00:22:50.116 } 00:22:50.116 EOF 00:22:50.116 )") 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.116 { 00:22:50.116 "params": { 00:22:50.116 "name": "Nvme$subsystem", 00:22:50.116 "trtype": "$TEST_TRANSPORT", 00:22:50.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.116 "adrfam": "ipv4", 00:22:50.116 "trsvcid": "$NVMF_PORT", 00:22:50.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.116 "hdgst": ${hdgst:-false}, 00:22:50.116 "ddgst": ${ddgst:-false} 00:22:50.116 }, 00:22:50.116 "method": "bdev_nvme_attach_controller" 00:22:50.116 } 00:22:50.116 EOF 00:22:50.116 )") 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.116 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.116 { 00:22:50.116 "params": { 00:22:50.116 "name": "Nvme$subsystem", 00:22:50.116 "trtype": "$TEST_TRANSPORT", 
00:22:50.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.116 "adrfam": "ipv4", 00:22:50.116 "trsvcid": "$NVMF_PORT", 00:22:50.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.116 "hdgst": ${hdgst:-false}, 00:22:50.116 "ddgst": ${ddgst:-false} 00:22:50.116 }, 00:22:50.116 "method": "bdev_nvme_attach_controller" 00:22:50.116 } 00:22:50.116 EOF 00:22:50.116 )") 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.378 { 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme$subsystem", 00:22:50.378 "trtype": "$TEST_TRANSPORT", 00:22:50.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "$NVMF_PORT", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.378 "hdgst": ${hdgst:-false}, 00:22:50.378 "ddgst": ${ddgst:-false} 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 } 00:22:50.378 EOF 00:22:50.378 )") 00:22:50.378 [2024-11-06 14:04:36.399735] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:22:50.378 [2024-11-06 14:04:36.399790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476231 ] 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.378 { 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme$subsystem", 00:22:50.378 "trtype": "$TEST_TRANSPORT", 00:22:50.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "$NVMF_PORT", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.378 "hdgst": ${hdgst:-false}, 00:22:50.378 "ddgst": ${ddgst:-false} 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 } 00:22:50.378 EOF 00:22:50.378 )") 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.378 { 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme$subsystem", 00:22:50.378 "trtype": "$TEST_TRANSPORT", 00:22:50.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "$NVMF_PORT", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.378 "hdgst": 
${hdgst:-false}, 00:22:50.378 "ddgst": ${ddgst:-false} 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 } 00:22:50.378 EOF 00:22:50.378 )") 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.378 { 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme$subsystem", 00:22:50.378 "trtype": "$TEST_TRANSPORT", 00:22:50.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "$NVMF_PORT", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.378 "hdgst": ${hdgst:-false}, 00:22:50.378 "ddgst": ${ddgst:-false} 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 } 00:22:50.378 EOF 00:22:50.378 )") 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.378 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme1", 00:22:50.378 "trtype": "tcp", 00:22:50.378 "traddr": "10.0.0.2", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "4420", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.378 "hdgst": false, 00:22:50.378 "ddgst": false 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 },{ 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme2", 00:22:50.378 "trtype": "tcp", 00:22:50.378 "traddr": "10.0.0.2", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "4420", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.378 "hdgst": false, 00:22:50.378 "ddgst": false 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 },{ 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme3", 00:22:50.378 "trtype": "tcp", 00:22:50.378 "traddr": "10.0.0.2", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "4420", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.378 "hdgst": false, 00:22:50.378 "ddgst": false 00:22:50.378 }, 00:22:50.378 "method": "bdev_nvme_attach_controller" 00:22:50.378 },{ 00:22:50.378 "params": { 00:22:50.378 "name": "Nvme4", 00:22:50.378 "trtype": "tcp", 00:22:50.378 "traddr": "10.0.0.2", 00:22:50.378 "adrfam": "ipv4", 00:22:50.378 "trsvcid": "4420", 00:22:50.378 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.378 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.378 "hdgst": false, 00:22:50.378 "ddgst": false 00:22:50.378 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 
00:22:50.379 "name": "Nvme5", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 00:22:50.379 "name": "Nvme6", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 00:22:50.379 "name": "Nvme7", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 00:22:50.379 "name": "Nvme8", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 00:22:50.379 "name": "Nvme9", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 },{ 00:22:50.379 "params": { 00:22:50.379 "name": "Nvme10", 00:22:50.379 "trtype": "tcp", 00:22:50.379 "traddr": "10.0.0.2", 00:22:50.379 "adrfam": "ipv4", 00:22:50.379 "trsvcid": "4420", 00:22:50.379 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.379 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.379 "hdgst": false, 00:22:50.379 "ddgst": false 00:22:50.379 }, 00:22:50.379 "method": "bdev_nvme_attach_controller" 00:22:50.379 }' 00:22:50.379 [2024-11-06 14:04:36.488927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.379 [2024-11-06 14:04:36.524934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.292 Running I/O for 10 seconds... 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.863 14:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 2476231 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2476231 ']' 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2476231 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:52.863 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2476231 00:22:52.863 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:52.863 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:52.863 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2476231' 00:22:52.863 killing process with pid 2476231 00:22:52.863 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2476231 00:22:52.863 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2476231 00:22:52.863 Received shutdown signal, test time was about 0.890700 seconds 00:22:52.863 00:22:52.863 Latency(us) 00:22:52.863 [2024-11-06T13:04:39.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme1n1 : 0.89 288.74 18.05 0.00 0.00 218686.29 20316.16 244667.73 00:22:52.863 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme2n1 : 0.89 287.70 17.98 0.00 0.00 214828.80 21299.20 267386.88 00:22:52.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme3n1 : 0.86 229.71 14.36 0.00 0.00 259927.93 3986.77 241172.48 00:22:52.863 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme4n1 : 0.86 222.70 13.92 0.00 0.00 264198.54 17367.04 239424.85 00:22:52.863 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme5n1 : 0.87 221.39 13.84 0.00 0.00 259623.25 32112.64 222822.40 00:22:52.863 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme6n1 : 0.87 220.44 13.78 0.00 0.00 254210.28 16165.55 241172.48 00:22:52.863 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme7n1 : 0.85 225.19 14.07 0.00 0.00 241979.45 15728.64 242920.11 00:22:52.863 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme8n1 : 0.88 219.32 13.71 0.00 0.00 243013.40 15182.51 249910.61 00:22:52.863 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme9n1 : 0.88 290.91 18.18 0.00 0.00 178484.27 17148.59 221074.77 00:22:52.863 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.863 Verification LBA range: start 0x0 length 0x400 00:22:52.863 Nvme10n1 : 0.88 217.15 13.57 0.00 0.00 233173.90 16165.55 
263891.63 00:22:52.863 [2024-11-06T13:04:39.143Z] =================================================================================================================== 00:22:52.863 [2024-11-06T13:04:39.143Z] Total : 2423.23 151.45 0.00 0.00 233891.27 3986.77 267386.88 00:22:53.124 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2475847 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.064 rmmod nvme_tcp 00:22:54.064 rmmod nvme_fabrics 00:22:54.064 rmmod nvme_keyring 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2475847 ']' 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2475847 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2475847 ']' 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2475847 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:54.064 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2475847 00:22:54.324 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:54.324 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:54.324 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2475847' 00:22:54.324 killing process with pid 2475847 00:22:54.324 14:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2475847 00:22:54.324 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2475847 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.584 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.497 00:22:56.497 real 
0m8.177s 00:22:56.497 user 0m25.030s 00:22:56.497 sys 0m1.354s 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.497 ************************************ 00:22:56.497 END TEST nvmf_shutdown_tc2 00:22:56.497 ************************************ 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:56.497 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.758 ************************************ 00:22:56.758 START TEST nvmf_shutdown_tc3 00:22:56.758 ************************************ 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.758 
14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.758 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:56.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:56.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.759 14:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:56.759 Found net devices under 0000:31:00.0: cvl_0_0 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.759 
14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:56.759 Found net devices under 0000:31:00.1: cvl_0_1 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.759 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.760 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.020 14:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:22:57.020 00:22:57.020 --- 10.0.0.2 ping statistics --- 00:22:57.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.020 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:57.020 00:22:57.020 --- 10.0.0.1 ping statistics --- 00:22:57.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.020 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.020 
14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2477686 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2477686 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2477686 ']' 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.020 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.020 [2024-11-06 14:04:43.221635] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:22:57.020 [2024-11-06 14:04:43.221690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.280 [2024-11-06 14:04:43.317714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.280 [2024-11-06 14:04:43.357371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.280 [2024-11-06 14:04:43.357408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.280 [2024-11-06 14:04:43.357413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.280 [2024-11-06 14:04:43.357419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.280 [2024-11-06 14:04:43.357423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.280 [2024-11-06 14:04:43.358981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.280 [2024-11-06 14:04:43.359139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.280 [2024-11-06 14:04:43.359266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.280 [2024-11-06 14:04:43.359268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.850 [2024-11-06 14:04:44.073277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.850 14:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.850 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.110 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.110 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.110 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.111 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.111 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.111 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.111 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 Malloc1 00:22:58.111 [2024-11-06 14:04:44.182716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.111 Malloc2 00:22:58.111 Malloc3 00:22:58.111 Malloc4 00:22:58.111 Malloc5 00:22:58.111 Malloc6 00:22:58.371 Malloc7 00:22:58.371 Malloc8 00:22:58.371 Malloc9 
00:22:58.371 Malloc10 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2478005 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2478005 /var/tmp/bdevperf.sock 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2478005 ']' 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:22:58.371 {
00:22:58.371 "params": {
00:22:58.371 "name": "Nvme$subsystem",
00:22:58.371 "trtype": "$TEST_TRANSPORT",
00:22:58.371 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:58.371 "adrfam": "ipv4",
00:22:58.371 "trsvcid": "$NVMF_PORT",
00:22:58.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:58.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:58.371 "hdgst": ${hdgst:-false},
00:22:58.371 "ddgst": ${ddgst:-false}
00:22:58.371 },
00:22:58.371 "method": "bdev_nvme_attach_controller"
00:22:58.371 }
00:22:58.371 EOF
00:22:58.371 )")
00:22:58.371 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[... the for/config+=/cat trace above repeats identically for each of subsystems 1 through 10; repeats elided ...]
00:22:58.371 [2024-11-06 14:04:44.624722] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:22:58.371 [2024-11-06 14:04:44.624781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478005 ]
00:22:58.632 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:22:58.632 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:22:58.632 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:22:58.632 "params": {
00:22:58.632 "name": "Nvme1",
00:22:58.632 "trtype": "tcp",
00:22:58.632 "traddr": "10.0.0.2",
00:22:58.632 "adrfam": "ipv4",
00:22:58.632 "trsvcid": "4420",
00:22:58.632 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:58.632 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:58.632 "hdgst": false,
00:22:58.632 "ddgst": false
00:22:58.632 },
00:22:58.632 "method": "bdev_nvme_attach_controller"
00:22:58.632 },{
[... the expanded entries for Nvme2 through Nvme10 follow the same pattern, differing only in the numeric suffix of name/subnqn/hostnqn; repeats elided ...]
00:22:58.633 "method": "bdev_nvme_attach_controller"
00:22:58.633 }'
00:22:58.633 [2024-11-06 14:04:44.715347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.633 [2024-11-06 14:04:44.751585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:00.544 Running I/O for 10 seconds...
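The gen_nvmf_target_json trace above reduces to a simple pattern: build one JSON fragment per subsystem with a heredoc, collect the fragments in an array, then comma-join them into the config bdevperf reads via process substitution. A minimal standalone sketch of that pattern follows; the transport/address values are illustrative stand-ins for the test environment, and the trailing `jq .` validation step from the trace is noted in a comment but omitted so the sketch has no external dependencies.

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern traced above (nvmf/common.sh@560-@586).
# Values below are illustrative, not read from a real test environment.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        # One JSON fragment per subsystem; $subsystem numbers the controller,
        # its NQNs, and its host NQN consistently.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments, as the trace does with IFS=, at @585-@586.
    # The real helper additionally pipes this through `jq .` to validate
    # and pretty-print the result before bdevperf consumes it.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1 2
```

The joined output is exactly the `},{`-separated structure visible in the printf trace; piping it through `jq .` (nvmf/common.sh@584) turns the concatenation into validated, pretty-printed JSON.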
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2477686
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2477686 ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2477686
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2477686
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2477686'
killing process with pid 2477686
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2477686
00:23:01.129 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2477686
00:23:01.129 [2024-11-06 14:04:47.307073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b9fa0 is same with the state(6) to be set
[... the line above repeats many times with successive microsecond timestamps (14:04:47.307150 through 14:04:47.307452); repeats elided ...]
00:23:01.130 [2024-11-06 14:04:47.309401] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:01.130 [2024-11-06 14:04:47.316248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba470 is same with the state(6) to be set
[... the line above repeats many times with successive microsecond timestamps (14:04:47.316280 through 14:04:47.316568); repeats elided ...]
00:23:01.131 [2024-11-06 14:04:47.318630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set
00:23:01.131 [2024-11-06 14:04:47.318656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set
00:23:01.131 [2024-11-06 14:04:47.318662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 
is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 
00:23:01.131 [2024-11-06 14:04:47.318799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318855] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.131 [2024-11-06 14:04:47.318882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.318964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bae30 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 
is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 
00:23:01.132 [2024-11-06 14:04:47.319789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319846] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 
is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.319996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 
00:23:01.132 [2024-11-06 14:04:47.320023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb300 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.132 [2024-11-06 14:04:47.320877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320911] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.320997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set 00:23:01.133 [2024-11-06 14:04:47.321028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 
is same with the state(6) to be set
00:23:01.133 [2024-11-06 14:04:47.321032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb7d0 is same with the state(6) to be set
[previous message repeated for tqpair=0x9bb7d0 through 2024-11-06 14:04:47.321158]
00:23:01.133 [2024-11-06 14:04:47.322110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bbca0 is same with the state(6) to be set
[previous message repeated for tqpair=0x9bbca0 through 2024-11-06 14:04:47.322430]
00:23:01.134 [2024-11-06 14:04:47.323096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23f90 is same with the state(6) to be set
[previous message repeated for tqpair=0xc23f90 through 2024-11-06 14:04:47.323406]
00:23:01.135 [2024-11-06 14:04:47.323860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set
[previous message repeated for tqpair=0xc24480 through 2024-11-06 14:04:47.324107]
00:23:01.135 [2024-11-06 14:04:47.327651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:01.135 [2024-11-06 14:04:47.327677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, and cid:3]
00:23:01.136 [2024-11-06 14:04:47.327735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53d30 is same with the state(6) to be set
[same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence (cid:0 through cid:3) followed by the nvme_tcp.c: 326 recv state error repeated for tqpair=0xecd630 (14:04:47.327842), tqpair=0xeae230 (14:04:47.327932), tqpair=0xa47fd0 (14:04:47.328024), and tqpair=0xa47dd0 (14:04:47.328113)]
00:23:01.136 [2024-11-06 14:04:47.328135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:01.136 [2024-11-06 14:04:47.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[pair repeated for cid:1 and cid:2]
00:23:01.136 [2024-11-06 14:04:47.328185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3
nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec11c0 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.328238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a610 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.328331] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa541b0 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.328422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.136 [2024-11-06 14:04:47.328479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.136 [2024-11-06 14:04:47.328486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70b10 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.136 [2024-11-06 14:04:47.333992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.333998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.334045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24480 is same with the state(6) to be set 00:23:01.137 [2024-11-06 14:04:47.352592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352971] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.352981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.352989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 
[2024-11-06 14:04:47.353188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.137 [2024-11-06 14:04:47.353279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.137 [2024-11-06 14:04:47.353287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 
14:04:47.353596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.138 [2024-11-06 14:04:47.353793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.138 [2024-11-06 14:04:47.353917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53d30 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.353942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd630 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.353962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeae230 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.353975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47fd0 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.353992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47dd0 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.354011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec11c0 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.354028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a610 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.354047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa541b0 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.354064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70b10 (9): Bad file descriptor 00:23:01.138 [2024-11-06 14:04:47.354096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.139 [2024-11-06 14:04:47.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.139 [2024-11-06 14:04:47.354124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.139 [2024-11-06 14:04:47.354140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.139 [2024-11-06 14:04:47.354157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe84810 is same with the state(6) to be set 00:23:01.139 [2024-11-06 14:04:47.354262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 
[2024-11-06 14:04:47.354617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.139 [2024-11-06 14:04:47.354891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.139 [2024-11-06 14:04:47.354902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.354919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.354927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.354948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.354958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.354966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.354976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.354983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.354994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 
[2024-11-06 14:04:47.355031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.355418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.355427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47420 is same with the state(6) to be set 00:23:01.140 [2024-11-06 14:04:47.358359] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:01.140 [2024-11-06 14:04:47.358388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:01.140 [2024-11-06 14:04:47.358466] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.358517] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.358556] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.358593] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.358630] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.359353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.140 [2024-11-06 14:04:47.359373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeae230 with addr=10.0.0.2, port=4420 00:23:01.140 [2024-11-06 14:04:47.359382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae230 is same with the state(6) to be set 00:23:01.140 [2024-11-06 14:04:47.359696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.140 [2024-11-06 14:04:47.359707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa53d30 with addr=10.0.0.2, port=4420 00:23:01.140 [2024-11-06 14:04:47.359715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53d30 is same with the state(6) to be set 00:23:01.140 [2024-11-06 14:04:47.360075] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.140 [2024-11-06 14:04:47.360103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeae230 (9): Bad file 
descriptor 00:23:01.140 [2024-11-06 14:04:47.360116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53d30 (9): Bad file descriptor 00:23:01.140 [2024-11-06 14:04:47.360172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.360183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.360198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.360206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.360216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.360224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.140 [2024-11-06 14:04:47.360234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.140 [2024-11-06 14:04:47.360242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.141 [2024-11-06 14:04:47.360256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.141 [2024-11-06 14:04:47.360263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.141 [2024-11-06 14:04:47.360273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.141 [2024-11-06 14:04:47.360282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 further identical READ / "ABORTED - SQ DELETION" record pairs elided: sqid:1, cid 6-62, lba 17152-24320, len:128 each ...]
00:23:01.142 [2024-11-06 14:04:47.361323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.142 [2024-11-06 14:04:47.361331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.142 [2024-11-06 14:04:47.361339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59b60 is same with the state(6) to be set
00:23:01.142 [2024-11-06 14:04:47.361423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:01.142 [2024-11-06 14:04:47.361434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:01.142 [2024-11-06 14:04:47.361443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:01.142 [2024-11-06 14:04:47.361452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:01.142 [2024-11-06 14:04:47.361463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:01.142 [2024-11-06 14:04:47.361473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:01.142 [2024-11-06 14:04:47.361481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:01.142 [2024-11-06 14:04:47.361487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:01.142 [2024-11-06 14:04:47.362738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:01.142 [2024-11-06 14:04:47.363150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.142 [2024-11-06 14:04:47.363167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec11c0 with addr=10.0.0.2, port=4420
00:23:01.142 [2024-11-06 14:04:47.363178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec11c0 is same with the state(6) to be set
00:23:01.142 [2024-11-06 14:04:47.363486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec11c0 (9): Bad file descriptor
00:23:01.142 [2024-11-06 14:04:47.363535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:01.142 [2024-11-06 14:04:47.363543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:01.142 [2024-11-06 14:04:47.363551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:01.142 [2024-11-06 14:04:47.363558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
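The long runs of identical READ / "ABORTED - SQ DELETION" records above are easier to audit when tallied rather than read line by line. A minimal sketch of such a tally, assuming only the record shapes visible in this log (the helper name `summarize` and the regexes are ours, not part of SPDK):

```python
import re

# Completion records, e.g.:
#   nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) \(\d+/\d+\) qid:(?P<qid>\d+)"
)

# Paired command records, e.g.:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 ...
COMMAND_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<opc>\w+) sqid:(?P<sqid>\d+) "
    r"cid:(?P<cid>\d+) nsid:\d+ lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def summarize(log_text):
    """Count completions per status string and report the LBA span of printed commands."""
    counts = {}
    lbas = []
    for m in COMPLETION_RE.finditer(log_text):
        counts[m.group("status")] = counts.get(m.group("status"), 0) + 1
    for m in COMMAND_RE.finditer(log_text):
        lbas.append(int(m.group("lba")))
    lba_range = (min(lbas), max(lbas)) if lbas else None
    return counts, lba_range
```

Fed the aborted-I/O sections of this log, it would reduce each flood to one status count plus an LBA range per qpair, which is how the elided runs above were characterized.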
00:23:01.142 [2024-11-06 14:04:47.363947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe84810 (9): Bad file descriptor
00:23:01.142 [2024-11-06 14:04:47.364043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.142 [2024-11-06 14:04:47.364054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 49 further identical READ / "ABORTED - SQ DELETION" record pairs elided: sqid:1, cid 1-49, lba 16512-22656, len:128 each ...]
00:23:01.144 [2024-11-06 14:04:47.373577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:01.144 [2024-11-06 14:04:47.373685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.373835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.373844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58110 is same with the state(6) to be set 00:23:01.144 [2024-11-06 14:04:47.375205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:01.144 [2024-11-06 14:04:47.375383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.144 [2024-11-06 14:04:47.375598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.144 [2024-11-06 14:04:47.375607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375903] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.375987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.375995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 
14:04:47.376119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.145 [2024-11-06 14:04:47.376327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.145 [2024-11-06 14:04:47.376336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.376344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.376354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.376361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.376372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.376380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.376389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.376397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.376406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc593d0 is same with the state(6) to be set 00:23:01.146 [2024-11-06 14:04:47.377682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:01.146 [2024-11-06 14:04:47.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.377984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.377993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378235] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.146 [2024-11-06 14:04:47.378353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.146 [2024-11-06 14:04:47.378361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 
14:04:47.378445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:01.147 [2024-11-06 14:04:47.378766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.378882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.378890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe545f0 is same with the state(6) to be set 00:23:01.147 [2024-11-06 14:04:47.380165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.380194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.380218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.380239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.380259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.147 [2024-11-06 14:04:47.380277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.147 [2024-11-06 14:04:47.380285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:01.148 [2024-11-06 14:04:47.380357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.148 [2024-11-06 14:04:47.380652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.148 [2024-11-06 14:04:47.380662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:23:01.148 [2024-11-06 14:04:47.380671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.148 [2024-11-06 14:04:47.380682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeat for cid:28-63, lba:19968-24448, len:128]
00:23:01.149 [2024-11-06 14:04:47.381365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55b20 is same with the state(6) to be set
00:23:01.149 [2024-11-06 14:04:47.382635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.149 [2024-11-06 14:04:47.382649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ pairs repeat for cid:5-8, lba:17024-17408; then WRITE pairs for cid:0-3, lba:24576-24960; then READ pairs for cid:9-63, lba:17536-24448; all len:128, all completed "ABORTED - SQ DELETION (00/08)"]
00:23:01.151 [2024-11-06 14:04:47.383848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57070 is same with the state(6) to be set
00:23:01.151 [2024-11-06 14:04:47.385133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.151 [2024-11-06 14:04:47.385147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / "ABORTED - SQ DELETION (00/08)" pairs continue for cid:1-15, lba:16512-18304, len:128]
00:23:01.151 [2024-11-06 14:04:47.385443] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.151 [2024-11-06 14:04:47.385737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.151 [2024-11-06 14:04:47.385748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.385986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.385996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.386004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.386014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.386022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.386033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 
14:04:47.390125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.152 [2024-11-06 14:04:47.390381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.152 [2024-11-06 14:04:47.390390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe585c0 is same with the state(6) to be set 00:23:01.152 [2024-11-06 14:04:47.391709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.391733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.391750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.391764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.391858] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:23:01.152 [2024-11-06 14:04:47.391878] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:01.152 [2024-11-06 14:04:47.391964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.391978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:01.152 [2024-11-06 14:04:47.392421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.152 [2024-11-06 14:04:47.392440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa541b0 with addr=10.0.0.2, port=4420 00:23:01.153 [2024-11-06 14:04:47.392450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa541b0 is same with the state(6) to be set 00:23:01.153 [2024-11-06 14:04:47.392651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.153 [2024-11-06 14:04:47.392663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa47fd0 with addr=10.0.0.2, port=4420 00:23:01.153 [2024-11-06 14:04:47.392671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa47fd0 is same with the state(6) to be set 00:23:01.416 [2024-11-06 14:04:47.393086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.416 [2024-11-06 14:04:47.393126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe70b10 with addr=10.0.0.2, port=4420 00:23:01.416 [2024-11-06 14:04:47.393139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70b10 is same with the state(6) to be set 00:23:01.416 [2024-11-06 14:04:47.393341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.416 [2024-11-06 14:04:47.393354] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa47dd0 with addr=10.0.0.2, port=4420 00:23:01.416 [2024-11-06 14:04:47.393362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa47dd0 is same with the state(6) to be set 00:23:01.416 [2024-11-06 14:04:47.395016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:01.416 [2024-11-06 14:04:47.395217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.416 [2024-11-06 14:04:47.395676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.416 [2024-11-06 14:04:47.395684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395742] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 
14:04:47.395966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.395985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.395994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.417 [2024-11-06 14:04:47.396169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.417 [2024-11-06 14:04:47.396178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.417 [2024-11-06 14:04:47.396187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.417 [2024-11-06 14:04:47.396197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.417 [2024-11-06 14:04:47.396207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.417 [2024-11-06 14:04:47.396216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.417 [2024-11-06 14:04:47.396225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.417 [2024-11-06 14:04:47.396234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b0b0 is same with the state(6) to be set
00:23:01.417 [2024-11-06 14:04:47.397797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:01.417 [2024-11-06 14:04:47.397822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:01.417 [2024-11-06 14:04:47.397833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:01.417 task offset: 16384 on job bdev=Nvme10n1 fails
00:23:01.417
00:23:01.417 Latency(us)
00:23:01.417 [2024-11-06T13:04:47.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme1n1 ended in about 0.83 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme1n1 : 0.83 154.84 9.68 77.42 0.00 272058.31 23156.05 255153.49
00:23:01.417 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme2n1 ended in about 0.83 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme2n1 : 0.83 154.37 9.65 77.18 0.00 266419.48 20753.07 255153.49
00:23:01.417 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme3n1 ended in about 0.81 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme3n1 : 0.81 237.12 14.82 79.04 0.00 190060.32 5024.43 253405.87
00:23:01.417 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme4n1 ended in about 0.83 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme4n1 : 0.83 158.72 9.92 76.95 0.00 249156.34 21736.11 220200.96
00:23:01.417 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme5n1 ended in about 0.83 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme5n1 : 0.83 153.45 9.59 76.73 0.00 248696.60 21189.97 249910.61
00:23:01.417 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme6n1 ended in about 0.84 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme6n1 : 0.84 157.78 9.86 76.50 0.00 238142.55 10540.37 249910.61
00:23:01.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme7n1 ended in about 0.84 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme7n1 : 0.84 151.81 9.49 75.91 0.00 238759.54 22937.60 246415.36
00:23:01.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.417 Job: Nvme8n1 ended in about 0.81 seconds with error
00:23:01.417 Verification LBA range: start 0x0 length 0x400
00:23:01.417 Nvme8n1 : 0.81 157.18 9.82 78.59 0.00 222869.62 13653.33 258648.75
00:23:01.418 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.418 Job: Nvme9n1 ended in about 0.85 seconds with error
00:23:01.418 Verification LBA range: start 0x0 length 0x400
00:23:01.418 Nvme9n1 : 0.85 150.77 9.42 75.38 0.00 228118.19 23374.51 221948.59
00:23:01.418 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.418 Job: Nvme10n1 ended in about 0.81 seconds with error
00:23:01.418 Verification LBA range: start 0x0 length 0x400
00:23:01.418 Nvme10n1 : 0.81 158.31 9.89 79.16 0.00 208364.66 19988.48 277872.64
00:23:01.418 [2024-11-06T13:04:47.698Z] ===================================================================================================================
00:23:01.418 [2024-11-06T13:04:47.698Z] Total : 1634.35 102.15 772.86 0.00 234809.75 5024.43 277872.64
00:23:01.418 [2024-11-06 14:04:47.421601] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:01.418 [2024-11-06 14:04:47.421632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:01.418 [2024-11-06 14:04:47.422057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.418 [2024-11-06 14:04:47.422077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd630 with addr=10.0.0.2, port=4420
00:23:01.418 [2024-11-06 14:04:47.422087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd630 is same with the state(6) to be set
00:23:01.418 [2024-11-06 14:04:47.422411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.418 [2024-11-06 14:04:47.422422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96a610 with addr=10.0.0.2, port=4420
00:23:01.418 [2024-11-06 14:04:47.422430] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a610 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.422444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa541b0 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.422458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47fd0 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.422467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70b10 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.422478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47dd0 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.422811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.422827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa53d30 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.422835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53d30 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.423064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.423075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeae230 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.423083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae230 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.423421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.423432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec11c0 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.423440] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec11c0 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.423753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.423765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe84810 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.423774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe84810 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.423783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd630 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.423793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a610 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.423802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.423810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.423824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.423835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.423845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.423852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.423859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:23:01.418 [2024-11-06 14:04:47.423865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.423874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.423881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.423888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.423895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.423903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.423910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.423917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.423923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.423975] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:01.418 [2024-11-06 14:04:47.423989] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:23:01.418 [2024-11-06 14:04:47.424358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53d30 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.424372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeae230 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.424383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec11c0 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.424393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe84810 (9): Bad file descriptor 00:23:01.418 [2024-11-06 14:04:47.424401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.424431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:01.418 [2024-11-06 14:04:47.424493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:01.418 [2024-11-06 14:04:47.424504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:01.418 [2024-11-06 14:04:47.424513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:01.418 [2024-11-06 14:04:47.424523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:01.418 [2024-11-06 14:04:47.424557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.424588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:01.418 [2024-11-06 14:04:47.424619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:01.418 [2024-11-06 14:04:47.424648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:01.418 [2024-11-06 14:04:47.424655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:01.418 [2024-11-06 14:04:47.424662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:01.418 [2024-11-06 14:04:47.424670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:01.418 [2024-11-06 14:04:47.425028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.425044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa47dd0 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.425053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa47dd0 is same with the state(6) to be set 00:23:01.418 [2024-11-06 14:04:47.425248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.418 [2024-11-06 14:04:47.425260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe70b10 with addr=10.0.0.2, port=4420 00:23:01.418 [2024-11-06 14:04:47.425268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70b10 is same with the state(6) to be set 00:23:01.419 [2024-11-06 14:04:47.425572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.419 [2024-11-06 14:04:47.425583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa47fd0 with addr=10.0.0.2, port=4420 00:23:01.419 [2024-11-06 14:04:47.425591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa47fd0 is same with the state(6) to be set 00:23:01.419 [2024-11-06 14:04:47.425904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.419 [2024-11-06 14:04:47.425919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa541b0 with addr=10.0.0.2, port=4420 00:23:01.419 [2024-11-06 14:04:47.425927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa541b0 is same with the state(6) to be set 00:23:01.419 [2024-11-06 14:04:47.425957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47dd0 (9): Bad file descriptor 00:23:01.419 [2024-11-06 14:04:47.425968] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70b10 (9): Bad file descriptor 00:23:01.419 [2024-11-06 14:04:47.425977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47fd0 (9): Bad file descriptor 00:23:01.419 [2024-11-06 14:04:47.425987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa541b0 (9): Bad file descriptor 00:23:01.419 [2024-11-06 14:04:47.426014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:01.419 [2024-11-06 14:04:47.426022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:01.419 [2024-11-06 14:04:47.426030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:01.419 [2024-11-06 14:04:47.426038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:01.419 [2024-11-06 14:04:47.426045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:01.419 [2024-11-06 14:04:47.426052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:01.419 [2024-11-06 14:04:47.426060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:01.419 [2024-11-06 14:04:47.426067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:01.419 [2024-11-06 14:04:47.426074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:01.419 [2024-11-06 14:04:47.426081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:01.419 [2024-11-06 14:04:47.426088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:01.419 [2024-11-06 14:04:47.426095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:01.419 [2024-11-06 14:04:47.426102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:01.419 [2024-11-06 14:04:47.426109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:01.419 [2024-11-06 14:04:47.426115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:01.419 [2024-11-06 14:04:47.426122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:01.419 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2478005 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2478005 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2478005 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.361 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.361 rmmod nvme_tcp 00:23:02.361 rmmod nvme_fabrics 00:23:02.621 rmmod nvme_keyring 00:23:02.621 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.621 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:02.621 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:02.621 14:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2477686 ']' 00:23:02.621 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2477686 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2477686 ']' 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2477686 00:23:02.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2477686) - No such process 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2477686 is not found' 00:23:02.622 Process with pid 2477686 is not found 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.622 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.534 00:23:04.534 real 0m7.984s 00:23:04.534 user 0m20.014s 00:23:04.534 sys 0m1.265s 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.534 ************************************ 00:23:04.534 END TEST nvmf_shutdown_tc3 00:23:04.534 ************************************ 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:04.534 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 ************************************ 00:23:04.795 START TEST nvmf_shutdown_tc4 00:23:04.795 ************************************ 00:23:04.795 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:04.795 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.796 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:04.796 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:04.796 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.796 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:23:04.796 Found net devices under 0000:31:00.0: cvl_0_0 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:04.796 Found net devices under 0000:31:00.1: cvl_0_1 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.796 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.796 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.796 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.796 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.796 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.796 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:05.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:23:05.058 00:23:05.058 --- 10.0.0.2 ping statistics --- 00:23:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.058 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:05.058 00:23:05.058 --- 10.0.0.1 ping statistics --- 00:23:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.058 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.058 14:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2479226 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2479226 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2479226 ']' 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:05.058 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.058 [2024-11-06 14:04:51.289138] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:23:05.058 [2024-11-06 14:04:51.289200] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.318 [2024-11-06 14:04:51.386003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.318 [2024-11-06 14:04:51.426062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.318 [2024-11-06 14:04:51.426100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.318 [2024-11-06 14:04:51.426109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.318 [2024-11-06 14:04:51.426114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.318 [2024-11-06 14:04:51.426119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.318 [2024-11-06 14:04:51.427957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.319 [2024-11-06 14:04:51.428114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.319 [2024-11-06 14:04:51.428272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.319 [2024-11-06 14:04:51.428274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.891 [2024-11-06 14:04:52.126993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.891 14:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.891 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.151 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.151 Malloc1 00:23:06.151 [2024-11-06 14:04:52.236695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.151 Malloc2 00:23:06.151 Malloc3 00:23:06.151 Malloc4 00:23:06.151 Malloc5 00:23:06.151 Malloc6 00:23:06.412 Malloc7 00:23:06.412 Malloc8 00:23:06.412 Malloc9 
00:23:06.412 Malloc10 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2479584 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:06.412 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:06.672 [2024-11-06 14:04:52.722732] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2479226 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2479226 ']' 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2479226 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2479226 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2479226' 00:23:11.967 killing process with pid 2479226 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2479226 00:23:11.967 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2479226 00:23:11.967 [2024-11-06 14:04:57.716512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54260 is same with the state(6) to be set 00:23:11.968 [2024-11-06 
14:04:57.716558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54260 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54260 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54260 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54260 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716687] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 [2024-11-06 14:04:57.716692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd533f0 is same with the state(6) to be set 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: 
-6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 [2024-11-06 14:04:57.717376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 
00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 [2024-11-06 14:04:57.718572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.968 starting I/O failed: -6 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 
00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with 
error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.968 Write completed with error (sct=0, sc=8) 00:23:11.968 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd526d0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd526d0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.719644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd526d0 is same with the state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.719649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd526d0 is same with the state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.719654] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd526d0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.719957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with the state(6) to be set 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.719965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with the state(6) to be set 00:23:11.969 Write 
completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.719974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with tstarting I/O failed: -6 00:23:11.969 he state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.719982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with tWrite completed with error (sct=0, sc=8) 00:23:11.969 he state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.719990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with tstarting I/O failed: -6 00:23:11.969 he state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.719998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with tWrite completed with error (sct=0, sc=8) 00:23:11.969 he state(6) to be set 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.720011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52ba0 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write 
completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.720283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.720298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.720304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.720309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 [2024-11-06 14:04:57.720315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 [2024-11-06 14:04:57.720320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53070 is same with the state(6) to be set 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error 
(sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with 
error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 [2024-11-06 14:04:57.721144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.969 NVMe io qpair process completion error 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write 
completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 Write completed with error (sct=0, sc=8) 00:23:11.969 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 [2024-11-06 14:04:57.722350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 starting I/O failed: -6 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 00:23:11.970 Write completed with error (sct=0, sc=8) 
00:23:11.970 starting I/O failed: -6
00:23:11.970 Write completed with error (sct=0, sc=8)
00:23:11.970 [2024-11-06 14:04:57.723152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.970 [2024-11-06 14:04:57.724053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.971 [2024-11-06 14:04:57.726029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.971 NVMe io qpair process completion error
00:23:11.971 [2024-11-06 14:04:57.727125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.971 [2024-11-06 14:04:57.728035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.972 [2024-11-06 14:04:57.728941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.972 [2024-11-06 14:04:57.730348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.972 NVMe io qpair process completion error
00:23:11.973 [2024-11-06 14:04:57.731455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.973 [2024-11-06 14:04:57.732303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.973 [2024-11-06 14:04:57.733209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
(sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with 
error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 [2024-11-06 14:04:57.735165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.974 NVMe io qpair process completion error 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 
00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 [2024-11-06 14:04:57.736377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 
00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 
00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 [2024-11-06 14:04:57.737190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.974 Write completed with error (sct=0, sc=8) 00:23:11.974 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 
starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 
Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 [2024-11-06 14:04:57.738566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 
00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, 
sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error 
(sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 [2024-11-06 14:04:57.739991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.975 NVMe io qpair process completion error 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.975 starting I/O failed: -6 00:23:11.975 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, 
sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 [2024-11-06 14:04:57.741112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 
00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write 
completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 [2024-11-06 14:04:57.741926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: 
-6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with error (sct=0, sc=8) 00:23:11.976 starting I/O failed: -6 00:23:11.976 Write completed with 
error (sct=0, sc=8)
00:23:11.976 starting I/O failed: -6
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.976 [2024-11-06 14:04:57.742855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.977 [2024-11-06 14:04:57.744464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.977 NVMe io qpair process completion error
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.977 [2024-11-06 14:04:57.745613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.978 [2024-11-06 14:04:57.746531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.978 [2024-11-06 14:04:57.747442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.978 [2024-11-06 14:04:57.750204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.978 NVMe io qpair process completion error
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.979 [2024-11-06 14:04:57.751465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.979 [2024-11-06 14:04:57.752310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.979 [2024-11-06 14:04:57.753238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.980 [2024-11-06 14:04:57.755003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.980 NVMe io qpair process completion error
(repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6)
00:23:11.980 Write
completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 [2024-11-06 14:04:57.756372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error 
(sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 starting I/O failed: -6 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.980 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 [2024-11-06 14:04:57.757185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with 
error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 
starting I/O failed: -6 00:23:11.981 [2024-11-06 14:04:57.758118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error 
(sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with 
error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 [2024-11-06 14:04:57.760585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:23:11.981 NVMe io qpair process completion error 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 starting I/O failed: -6 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.981 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 
Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 [2024-11-06 14:04:57.761880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 
00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 [2024-11-06 14:04:57.762692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write 
completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 
00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 starting I/O failed: -6 00:23:11.982 Write completed with error (sct=0, sc=8) 00:23:11.982 [2024-11-06 14:04:57.763625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or 
address) on qpair id 4 00:23:11.982 starting I/O failed: -6 00:23:11.982 starting I/O failed: -6 00:23:11.982 starting I/O failed: -6 00:23:11.982 starting I/O failed: -6 00:23:11.982 starting I/O failed: -6 00:23:11.982 starting I/O failed: -6 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O 
failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting 
I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 Write completed with error (sct=0, sc=8) 00:23:11.983 starting I/O failed: -6 00:23:11.983 [2024-11-06 14:04:57.765893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.983 NVMe io qpair process completion error 00:23:11.983 Initializing NVMe Controllers 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:11.983 Controller IO queue size 128, less than required. 00:23:11.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:11.983 Initialization complete. Launching workers. 
00:23:11.983 ========================================================
00:23:11.983                                                                                                      Latency(us)
00:23:11.983 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1819.99      78.20   70346.44     917.98  118564.20
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1890.08      81.21   67755.19     879.91  118735.30
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1928.61      82.87   66430.34     690.73  146502.07
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1908.61      82.01   67146.47     820.29  117214.85
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1859.56      79.90   68939.31     915.41  116317.78
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1938.92      83.31   66156.46     704.56  118396.67
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1910.08      82.07   67179.17     844.31  118223.99
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1925.45      82.73   66679.97     651.25  132300.41
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1927.13      82.81   65962.61     662.14  118029.72
00:23:11.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1914.92      82.28   66402.13     818.80  118213.09
00:23:11.983 ========================================================
00:23:11.983 Total                                                                    :   19023.36     817.41   67276.05     651.25  146502.07
00:23:11.983
00:23:11.983 [2024-11-06 14:04:57.768653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e9f0 is same with the state(6) to be set 00:23:11.983 [2024-11-06 14:04:57.768698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f050 is same with the state(6) to be set 00:23:11.983 [2024-11-06 14:04:57.768731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x166f380 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670540 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e6c0 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f6b0 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f9e0 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e390 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670360 is same with the state(6) to be set 00:23:11.984 [2024-11-06 14:04:57.768959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e060 is same with the state(6) to be set 00:23:11.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:11.984 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2479584 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2479584 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@638 -- # local arg=wait 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2479584 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.926 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.927 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.927 rmmod nvme_tcp 00:23:12.927 rmmod nvme_fabrics 00:23:12.927 rmmod nvme_keyring 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2479226 ']' 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2479226 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2479226 ']' 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2479226 00:23:12.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2479226) - No such process 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2479226 is not found' 00:23:12.927 Process with pid 2479226 is not found 
00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.927 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.473 00:23:15.473 real 0m10.280s 00:23:15.473 user 0m27.891s 00:23:15.473 sys 0m4.009s 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.473 14:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.473 ************************************ 00:23:15.473 END TEST nvmf_shutdown_tc4 00:23:15.473 ************************************ 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:15.473 00:23:15.473 real 0m43.723s 00:23:15.473 user 1m45.601s 00:23:15.473 sys 0m14.038s 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.473 ************************************ 00:23:15.473 END TEST nvmf_shutdown 00:23:15.473 ************************************ 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:15.473 ************************************ 00:23:15.473 START TEST nvmf_nsid 00:23:15.473 ************************************ 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:15.473 * Looking for test storage... 
00:23:15.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.473 
14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:15.473 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.474 --rc genhtml_branch_coverage=1 00:23:15.474 --rc genhtml_function_coverage=1 00:23:15.474 --rc genhtml_legend=1 00:23:15.474 --rc geninfo_all_blocks=1 00:23:15.474 --rc 
geninfo_unexecuted_blocks=1 00:23:15.474 00:23:15.474 ' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.474 --rc genhtml_branch_coverage=1 00:23:15.474 --rc genhtml_function_coverage=1 00:23:15.474 --rc genhtml_legend=1 00:23:15.474 --rc geninfo_all_blocks=1 00:23:15.474 --rc geninfo_unexecuted_blocks=1 00:23:15.474 00:23:15.474 ' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.474 --rc genhtml_branch_coverage=1 00:23:15.474 --rc genhtml_function_coverage=1 00:23:15.474 --rc genhtml_legend=1 00:23:15.474 --rc geninfo_all_blocks=1 00:23:15.474 --rc geninfo_unexecuted_blocks=1 00:23:15.474 00:23:15.474 ' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.474 --rc genhtml_branch_coverage=1 00:23:15.474 --rc genhtml_function_coverage=1 00:23:15.474 --rc genhtml_legend=1 00:23:15.474 --rc geninfo_all_blocks=1 00:23:15.474 --rc geninfo_unexecuted_blocks=1 00:23:15.474 00:23:15.474 ' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.474 14:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.474 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:23.666 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:23.666 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:23.666 Found net devices under 0000:31:00.0: cvl_0_0 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:23.666 Found net devices under 0000:31:00.1: cvl_0_1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.666 14:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.666 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.666 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:23.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:23:23.666 00:23:23.666 --- 10.0.0.2 ping statistics --- 00:23:23.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.666 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:23.666 00:23:23.666 --- 10.0.0.1 ping statistics --- 00:23:23.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.666 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.666 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.667 14:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2484983 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2484983 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2484983 ']' 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.667 [2024-11-06 14:05:09.194553] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:23:23.667 [2024-11-06 14:05:09.194618] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.667 [2024-11-06 14:05:09.269024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.667 [2024-11-06 14:05:09.314727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.667 [2024-11-06 14:05:09.314794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.667 [2024-11-06 14:05:09.314802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.667 [2024-11-06 14:05:09.314807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.667 [2024-11-06 14:05:09.314812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.667 [2024-11-06 14:05:09.315536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2485073 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.667 
14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ffcd943f-e18f-43d1-9780-99914926eb14 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9526b9c6-1ec8-4afd-8676-964f4f5ff45e 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8b064029-2941-4317-b07d-323967db30b0 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.667 null0 00:23:23.667 null1 00:23:23.667 null2 00:23:23.667 [2024-11-06 14:05:09.537691] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:23:23.667 [2024-11-06 14:05:09.537775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485073 ] 00:23:23.667 [2024-11-06 14:05:09.539138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.667 [2024-11-06 14:05:09.563435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2485073 /var/tmp/tgt2.sock 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2485073 ']' 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:23.667 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.667 [2024-11-06 14:05:09.631278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.667 [2024-11-06 14:05:09.685233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.933 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:23.933 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:23.933 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:24.193 [2024-11-06 14:05:10.249127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.194 [2024-11-06 14:05:10.265310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:24.194 nvme0n1 nvme0n2 00:23:24.194 nvme1n1 00:23:24.194 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:24.194 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:24.194 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:25.579 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ffcd943f-e18f-43d1-9780-99914926eb14 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:26.520 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:26.520 14:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ffcd943fe18f43d1978099914926eb14 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FFCD943FE18F43D1978099914926EB14 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FFCD943FE18F43D1978099914926EB14 == \F\F\C\D\9\4\3\F\E\1\8\F\4\3\D\1\9\7\8\0\9\9\9\1\4\9\2\6\E\B\1\4 ]] 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9526b9c6-1ec8-4afd-8676-964f4f5ff45e 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:26.781 
14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9526b9c61ec84afd8676964f4f5ff45e 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9526B9C61EC84AFD8676964F4F5FF45E 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9526B9C61EC84AFD8676964F4F5FF45E == \9\5\2\6\B\9\C\6\1\E\C\8\4\A\F\D\8\6\7\6\9\6\4\F\4\F\5\F\F\4\5\E ]] 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8b064029-2941-4317-b07d-323967db30b0 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:26.781 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:26.781 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8b06402929414317b07d323967db30b0 00:23:26.781 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8B06402929414317B07D323967DB30B0 00:23:26.781 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8B06402929414317B07D323967DB30B0 == \8\B\0\6\4\0\2\9\2\9\4\1\4\3\1\7\B\0\7\D\3\2\3\9\6\7\D\B\3\0\B\0 ]] 00:23:26.781 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2485073 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2485073 ']' 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2485073 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2485073 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2485073' 00:23:27.042 killing process with pid 2485073 00:23:27.042 14:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2485073 00:23:27.042 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2485073 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.302 rmmod nvme_tcp 00:23:27.302 rmmod nvme_fabrics 00:23:27.302 rmmod nvme_keyring 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2484983 ']' 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2484983 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2484983 ']' 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2484983 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:27.302 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.302 14:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484983 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2484983' 00:23:27.563 killing process with pid 2484983 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2484983 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2484983 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.563 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.563 14:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.107 00:23:30.107 real 0m14.533s 00:23:30.107 user 0m10.852s 00:23:30.107 sys 0m6.903s 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.107 ************************************ 00:23:30.107 END TEST nvmf_nsid 00:23:30.107 ************************************ 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:30.107 00:23:30.107 real 13m9.956s 00:23:30.107 user 27m29.003s 00:23:30.107 sys 3m56.252s 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.107 14:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.107 ************************************ 00:23:30.107 END TEST nvmf_target_extra 00:23:30.107 ************************************ 00:23:30.107 14:05:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.107 14:05:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:30.107 14:05:15 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.107 14:05:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.107 ************************************ 00:23:30.107 START TEST nvmf_host 00:23:30.107 ************************************ 00:23:30.107 14:05:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.107 * Looking for test storage... 
00:23:30.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:30.107 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.108 --rc genhtml_branch_coverage=1 00:23:30.108 --rc genhtml_function_coverage=1 00:23:30.108 --rc genhtml_legend=1 00:23:30.108 --rc geninfo_all_blocks=1 00:23:30.108 --rc geninfo_unexecuted_blocks=1 00:23:30.108 00:23:30.108 ' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.108 --rc genhtml_branch_coverage=1 00:23:30.108 --rc genhtml_function_coverage=1 00:23:30.108 --rc genhtml_legend=1 00:23:30.108 --rc 
geninfo_all_blocks=1 00:23:30.108 --rc geninfo_unexecuted_blocks=1 00:23:30.108 00:23:30.108 ' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.108 --rc genhtml_branch_coverage=1 00:23:30.108 --rc genhtml_function_coverage=1 00:23:30.108 --rc genhtml_legend=1 00:23:30.108 --rc geninfo_all_blocks=1 00:23:30.108 --rc geninfo_unexecuted_blocks=1 00:23:30.108 00:23:30.108 ' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.108 --rc genhtml_branch_coverage=1 00:23:30.108 --rc genhtml_function_coverage=1 00:23:30.108 --rc genhtml_legend=1 00:23:30.108 --rc geninfo_all_blocks=1 00:23:30.108 --rc geninfo_unexecuted_blocks=1 00:23:30.108 00:23:30.108 ' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.108 ************************************ 00:23:30.108 START TEST nvmf_multicontroller 00:23:30.108 ************************************ 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.108 * Looking for test storage... 
00:23:30.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.108 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.370 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.371 --rc genhtml_branch_coverage=1 00:23:30.371 --rc genhtml_function_coverage=1 
00:23:30.371 --rc genhtml_legend=1 00:23:30.371 --rc geninfo_all_blocks=1 00:23:30.371 --rc geninfo_unexecuted_blocks=1 00:23:30.371 00:23:30.371 ' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.371 --rc genhtml_branch_coverage=1 00:23:30.371 --rc genhtml_function_coverage=1 00:23:30.371 --rc genhtml_legend=1 00:23:30.371 --rc geninfo_all_blocks=1 00:23:30.371 --rc geninfo_unexecuted_blocks=1 00:23:30.371 00:23:30.371 ' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.371 --rc genhtml_branch_coverage=1 00:23:30.371 --rc genhtml_function_coverage=1 00:23:30.371 --rc genhtml_legend=1 00:23:30.371 --rc geninfo_all_blocks=1 00:23:30.371 --rc geninfo_unexecuted_blocks=1 00:23:30.371 00:23:30.371 ' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.371 --rc genhtml_branch_coverage=1 00:23:30.371 --rc genhtml_function_coverage=1 00:23:30.371 --rc genhtml_legend=1 00:23:30.371 --rc geninfo_all_blocks=1 00:23:30.371 --rc geninfo_unexecuted_blocks=1 00:23:30.371 00:23:30.371 ' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.371 14:05:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.371 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.372 14:05:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.508 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:23:38.509 Found 0000:31:00.0 (0x8086 - 0x159b)
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:23:38.509 Found 0000:31:00.1 (0x8086 - 0x159b)
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:23:38.509 Found net devices under 0000:31:00.0: cvl_0_0
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:23:38.509 Found net devices under 0000:31:00.1: cvl_0_1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:38.509 14:05:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:38.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:38.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms
00:23:38.509
00:23:38.509 --- 10.0.0.2 ping statistics ---
00:23:38.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:38.509 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:38.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:38.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:23:38.509
00:23:38.509 --- 10.0.0.1 ping statistics ---
00:23:38.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:38.509 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2490274
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2490274
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2490274 ']'
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:38.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:38.509 14:05:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:38.510 [2024-11-06 14:05:24.194037] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:23:38.510 [2024-11-06 14:05:24.194106] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:38.510 [2024-11-06 14:05:24.294727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:38.510 [2024-11-06 14:05:24.347081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:38.510 [2024-11-06 14:05:24.347134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:38.510 [2024-11-06 14:05:24.347142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:38.510 [2024-11-06 14:05:24.347149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:38.510 [2024-11-06 14:05:24.347156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:38.510 [2024-11-06 14:05:24.349231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:38.510 [2024-11-06 14:05:24.349391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:38.510 [2024-11-06 14:05:24.349391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:38.771 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:38.771 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0
00:23:38.771 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:38.771 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:38.771 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 [2024-11-06 14:05:25.076475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 Malloc0
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 [2024-11-06 14:05:25.148070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 [2024-11-06 14:05:25.159860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 Malloc1
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2490495
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2490495 /var/tmp/bdevperf.sock
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2490495 ']'
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:39.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:39.032 14:05:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:39.977 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:39.977 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0
00:23:39.977 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:39.977 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.977 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.238 NVMe0n1
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.238 1
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.238 request:
00:23:40.238 {
00:23:40.238 "name": "NVMe0",
00:23:40.238 "trtype": "tcp",
00:23:40.238 "traddr": "10.0.0.2",
00:23:40.238 "adrfam": "ipv4",
00:23:40.238 "trsvcid": "4420",
00:23:40.238 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.238 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:23:40.238 "hostaddr": "10.0.0.1",
00:23:40.238 "prchk_reftag": false,
00:23:40.238 "prchk_guard": false,
00:23:40.238 "hdgst": false,
00:23:40.238 "ddgst": false,
00:23:40.238 "allow_unrecognized_csi": false,
00:23:40.238 "method": "bdev_nvme_attach_controller",
00:23:40.238 "req_id": 1
00:23:40.238 }
00:23:40.238 Got JSON-RPC error response
00:23:40.238 response:
00:23:40.238 {
00:23:40.238 "code": -114,
00:23:40.238 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:40.238 }
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:40.238 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.239 request:
00:23:40.239 {
00:23:40.239 "name": "NVMe0",
00:23:40.239 "trtype": "tcp",
00:23:40.239 "traddr": "10.0.0.2",
00:23:40.239 "adrfam": "ipv4",
00:23:40.239 "trsvcid": "4420",
00:23:40.239 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:40.239 "hostaddr": "10.0.0.1",
00:23:40.239 "prchk_reftag": false,
00:23:40.239 "prchk_guard": false,
00:23:40.239 "hdgst": false,
00:23:40.239 "ddgst": false,
00:23:40.239 "allow_unrecognized_csi": false,
00:23:40.239 "method": "bdev_nvme_attach_controller",
00:23:40.239 "req_id": 1
00:23:40.239 }
00:23:40.239 Got JSON-RPC error response
00:23:40.239 response:
00:23:40.239 {
00:23:40.239 "code": -114,
00:23:40.239 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:40.239 }
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.239 request:
00:23:40.239 {
00:23:40.239 "name": "NVMe0",
00:23:40.239 "trtype": "tcp",
00:23:40.239 "traddr": "10.0.0.2",
00:23:40.239 "adrfam": "ipv4",
00:23:40.239 "trsvcid": "4420",
00:23:40.239 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.239 "hostaddr": "10.0.0.1",
00:23:40.239 "prchk_reftag": false,
00:23:40.239 "prchk_guard": false,
00:23:40.239 "hdgst": false,
00:23:40.239 "ddgst": false,
00:23:40.239 "multipath": "disable",
00:23:40.239 "allow_unrecognized_csi": false,
00:23:40.239 "method": "bdev_nvme_attach_controller",
00:23:40.239 "req_id": 1
00:23:40.239 }
00:23:40.239 Got JSON-RPC error response
00:23:40.239 response:
00:23:40.239 {
00:23:40.239 "code": -114,
00:23:40.239 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:23:40.239 }
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.239 request:
00:23:40.239 {
00:23:40.239 "name": "NVMe0",
00:23:40.239 "trtype": "tcp",
00:23:40.239 "traddr": "10.0.0.2",
00:23:40.239 "adrfam": "ipv4",
00:23:40.239 "trsvcid": "4420",
00:23:40.239 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.239 "hostaddr": "10.0.0.1",
00:23:40.239 "prchk_reftag": false,
00:23:40.239 "prchk_guard": false,
00:23:40.239 "hdgst": false,
00:23:40.239 "ddgst": false,
00:23:40.239 "multipath": "failover",
00:23:40.239 "allow_unrecognized_csi": false,
00:23:40.239 "method": "bdev_nvme_attach_controller",
00:23:40.239 "req_id": 1
00:23:40.239 }
00:23:40.239 Got JSON-RPC error response
00:23:40.239 response:
00:23:40.239 {
00:23:40.239 "code": -114,
00:23:40.239 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:40.239 }
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.239 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.501 NVMe0n1
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.501
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:23:40.501 14:05:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:41.886 {
00:23:41.886 "results": [
00:23:41.886 {
00:23:41.886 "job": "NVMe0n1",
00:23:41.886 "core_mask": "0x1",
00:23:41.886 "workload": "write",
00:23:41.886 "status": "finished",
00:23:41.886 "queue_depth": 128,
00:23:41.886 "io_size": 4096,
00:23:41.886 "runtime": 1.006932,
00:23:41.886 "iops": 26860.80092796733,
00:23:41.886 "mibps": 104.92500362487239,
00:23:41.886 "io_failed": 0,
00:23:41.886 "io_timeout": 0,
00:23:41.886 "avg_latency_us": 4754.05361432568,
00:23:41.886 "min_latency_us": 2088.96,
00:23:41.886 "max_latency_us": 16711.68
00:23:41.886 }
00:23:41.886 ],
00:23:41.886 "core_count": 1
00:23:41.886 }
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2490495
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2490495 ']'
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2490495
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2490495
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2490495'
00:23:41.886 killing process with pid 2490495
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2490495
00:23:41.886 14:05:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2490495
00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode1 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:41.886 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:41.886 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:41.886 [2024-11-06 14:05:25.289434] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:23:41.886 [2024-11-06 14:05:25.289509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490495 ] 00:23:41.886 [2024-11-06 14:05:25.384601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.886 [2024-11-06 14:05:25.437775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.887 [2024-11-06 14:05:26.706117] bdev.c:4753:bdev_name_add: *ERROR*: Bdev name 013449b5-77e2-4908-b167-f88cf3dd8200 already exists 00:23:41.887 [2024-11-06 14:05:26.706164] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:013449b5-77e2-4908-b167-f88cf3dd8200 alias for bdev NVMe1n1 00:23:41.887 [2024-11-06 14:05:26.706175] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:41.887 Running I/O for 1 seconds... 00:23:41.887 26838.00 IOPS, 104.84 MiB/s 00:23:41.887 Latency(us) 00:23:41.887 [2024-11-06T13:05:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.887 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:41.887 NVMe0n1 : 1.01 26860.80 104.93 0.00 0.00 4754.05 2088.96 16711.68 00:23:41.887 [2024-11-06T13:05:28.167Z] =================================================================================================================== 00:23:41.887 [2024-11-06T13:05:28.167Z] Total : 26860.80 104.93 0.00 0.00 4754.05 2088.96 16711.68 00:23:41.887 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.887 00:23:41.887 Latency(us) 00:23:41.887 [2024-11-06T13:05:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.887 [2024-11-06T13:05:28.167Z] =================================================================================================================== 00:23:41.887 [2024-11-06T13:05:28.167Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:23:41.887 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.887 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.887 rmmod nvme_tcp 00:23:41.887 rmmod nvme_fabrics 00:23:41.887 rmmod nvme_keyring 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2490274 ']' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2490274 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2490274 ']' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2490274 
00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2490274 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2490274' 00:23:42.148 killing process with pid 2490274 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2490274 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2490274 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.148 14:05:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.695 00:23:44.695 real 0m14.266s 00:23:44.695 user 0m17.526s 00:23:44.695 sys 0m6.571s 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.695 ************************************ 00:23:44.695 END TEST nvmf_multicontroller 00:23:44.695 ************************************ 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.695 ************************************ 00:23:44.695 START TEST nvmf_aer 00:23:44.695 ************************************ 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:44.695 * Looking for test storage... 
00:23:44.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:44.695 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:44.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.696 --rc genhtml_branch_coverage=1 00:23:44.696 --rc genhtml_function_coverage=1 00:23:44.696 --rc genhtml_legend=1 00:23:44.696 --rc geninfo_all_blocks=1 00:23:44.696 --rc geninfo_unexecuted_blocks=1 00:23:44.696 00:23:44.696 ' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:44.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.696 --rc 
genhtml_branch_coverage=1 00:23:44.696 --rc genhtml_function_coverage=1 00:23:44.696 --rc genhtml_legend=1 00:23:44.696 --rc geninfo_all_blocks=1 00:23:44.696 --rc geninfo_unexecuted_blocks=1 00:23:44.696 00:23:44.696 ' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:44.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.696 --rc genhtml_branch_coverage=1 00:23:44.696 --rc genhtml_function_coverage=1 00:23:44.696 --rc genhtml_legend=1 00:23:44.696 --rc geninfo_all_blocks=1 00:23:44.696 --rc geninfo_unexecuted_blocks=1 00:23:44.696 00:23:44.696 ' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:44.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.696 --rc genhtml_branch_coverage=1 00:23:44.696 --rc genhtml_function_coverage=1 00:23:44.696 --rc genhtml_legend=1 00:23:44.696 --rc geninfo_all_blocks=1 00:23:44.696 --rc geninfo_unexecuted_blocks=1 00:23:44.696 00:23:44.696 ' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.696 14:05:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.696 14:05:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:52.842 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:52.842 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.842 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.843 14:05:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:52.843 Found net devices under 0000:31:00.0: cvl_0_0 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:52.843 Found net devices under 0000:31:00.1: cvl_0_1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:23:52.843 00:23:52.843 --- 10.0.0.2 ping statistics --- 00:23:52.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.843 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:52.843 00:23:52.843 --- 10.0.0.1 ping statistics --- 00:23:52.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.843 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2495268 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2495268 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2495268 ']' 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.843 14:05:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.843 [2024-11-06 14:05:38.513821] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:23:52.843 [2024-11-06 14:05:38.513887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.843 [2024-11-06 14:05:38.616851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.843 [2024-11-06 14:05:38.671302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:52.843 [2024-11-06 14:05:38.671355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.843 [2024-11-06 14:05:38.671365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.843 [2024-11-06 14:05:38.671372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.843 [2024-11-06 14:05:38.671379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.843 [2024-11-06 14:05:38.673515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.843 [2024-11-06 14:05:38.673674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.843 [2024-11-06 14:05:38.673815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.843 [2024-11-06 14:05:38.673816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.104 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.104 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:53.104 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.104 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.104 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 [2024-11-06 14:05:39.396228] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 Malloc0 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 [2024-11-06 14:05:39.469582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.366 [ 00:23:53.366 { 00:23:53.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.366 "subtype": "Discovery", 00:23:53.366 "listen_addresses": [], 00:23:53.366 "allow_any_host": true, 00:23:53.366 "hosts": [] 00:23:53.366 }, 00:23:53.366 { 00:23:53.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.366 "subtype": "NVMe", 00:23:53.366 "listen_addresses": [ 00:23:53.366 { 00:23:53.366 "trtype": "TCP", 00:23:53.366 "adrfam": "IPv4", 00:23:53.366 "traddr": "10.0.0.2", 00:23:53.366 "trsvcid": "4420" 00:23:53.366 } 00:23:53.366 ], 00:23:53.366 "allow_any_host": true, 00:23:53.366 "hosts": [], 00:23:53.366 "serial_number": "SPDK00000000000001", 00:23:53.366 "model_number": "SPDK bdev Controller", 00:23:53.366 "max_namespaces": 2, 00:23:53.366 "min_cntlid": 1, 00:23:53.366 "max_cntlid": 65519, 00:23:53.366 "namespaces": [ 00:23:53.366 { 00:23:53.366 "nsid": 1, 00:23:53.366 "bdev_name": "Malloc0", 00:23:53.366 "name": "Malloc0", 00:23:53.366 "nguid": "D60E5FB6A4DE4C36A4CDCFFB7DD1023F", 00:23:53.366 "uuid": "d60e5fb6-a4de-4c36-a4cd-cffb7dd1023f" 00:23:53.366 } 00:23:53.366 ] 00:23:53.366 } 00:23:53.366 ] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2495566 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:53.366 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 Malloc1 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 Asynchronous Event Request test 00:23:53.628 Attaching to 10.0.0.2 00:23:53.628 Attached to 10.0.0.2 00:23:53.628 Registering asynchronous event callbacks... 00:23:53.628 Starting namespace attribute notice tests for all controllers... 00:23:53.628 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:53.628 aer_cb - Changed Namespace 00:23:53.628 Cleaning up... 
00:23:53.628 [ 00:23:53.628 { 00:23:53.628 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.628 "subtype": "Discovery", 00:23:53.628 "listen_addresses": [], 00:23:53.628 "allow_any_host": true, 00:23:53.628 "hosts": [] 00:23:53.628 }, 00:23:53.628 { 00:23:53.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.628 "subtype": "NVMe", 00:23:53.628 "listen_addresses": [ 00:23:53.628 { 00:23:53.628 "trtype": "TCP", 00:23:53.628 "adrfam": "IPv4", 00:23:53.628 "traddr": "10.0.0.2", 00:23:53.628 "trsvcid": "4420" 00:23:53.628 } 00:23:53.628 ], 00:23:53.628 "allow_any_host": true, 00:23:53.628 "hosts": [], 00:23:53.628 "serial_number": "SPDK00000000000001", 00:23:53.628 "model_number": "SPDK bdev Controller", 00:23:53.628 "max_namespaces": 2, 00:23:53.628 "min_cntlid": 1, 00:23:53.628 "max_cntlid": 65519, 00:23:53.628 "namespaces": [ 00:23:53.628 { 00:23:53.628 "nsid": 1, 00:23:53.628 "bdev_name": "Malloc0", 00:23:53.628 "name": "Malloc0", 00:23:53.628 "nguid": "D60E5FB6A4DE4C36A4CDCFFB7DD1023F", 00:23:53.628 "uuid": "d60e5fb6-a4de-4c36-a4cd-cffb7dd1023f" 00:23:53.628 }, 00:23:53.628 { 00:23:53.628 "nsid": 2, 00:23:53.628 "bdev_name": "Malloc1", 00:23:53.628 "name": "Malloc1", 00:23:53.628 "nguid": "B8209D385FA74325A274427128B99F81", 00:23:53.628 "uuid": "b8209d38-5fa7-4325-a274-427128b99f81" 00:23:53.628 } 00:23:53.628 ] 00:23:53.628 } 00:23:53.628 ] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2495566 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.628 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.628 rmmod nvme_tcp 00:23:53.628 rmmod nvme_fabrics 00:23:53.628 rmmod nvme_keyring 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2495268 ']' 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2495268 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2495268 ']' 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2495268 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.889 14:05:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2495268 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2495268' 00:23:53.889 killing process with pid 2495268 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2495268 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2495268 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.889 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.150 14:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.062 00:23:56.062 real 0m11.698s 00:23:56.062 user 0m8.235s 00:23:56.062 sys 0m6.277s 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.062 ************************************ 00:23:56.062 END TEST nvmf_aer 00:23:56.062 ************************************ 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:56.062 14:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.323 ************************************ 00:23:56.323 START TEST nvmf_async_init 00:23:56.323 ************************************ 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.323 * Looking for test storage... 
00:23:56.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.323 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.324 14:05:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.324 --rc genhtml_branch_coverage=1 00:23:56.324 --rc genhtml_function_coverage=1 00:23:56.324 --rc genhtml_legend=1 00:23:56.324 --rc geninfo_all_blocks=1 00:23:56.324 --rc geninfo_unexecuted_blocks=1 00:23:56.324 
00:23:56.324 ' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.324 --rc genhtml_branch_coverage=1 00:23:56.324 --rc genhtml_function_coverage=1 00:23:56.324 --rc genhtml_legend=1 00:23:56.324 --rc geninfo_all_blocks=1 00:23:56.324 --rc geninfo_unexecuted_blocks=1 00:23:56.324 00:23:56.324 ' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.324 --rc genhtml_branch_coverage=1 00:23:56.324 --rc genhtml_function_coverage=1 00:23:56.324 --rc genhtml_legend=1 00:23:56.324 --rc geninfo_all_blocks=1 00:23:56.324 --rc geninfo_unexecuted_blocks=1 00:23:56.324 00:23:56.324 ' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.324 --rc genhtml_branch_coverage=1 00:23:56.324 --rc genhtml_function_coverage=1 00:23:56.324 --rc genhtml_legend=1 00:23:56.324 --rc geninfo_all_blocks=1 00:23:56.324 --rc geninfo_unexecuted_blocks=1 00:23:56.324 00:23:56.324 ' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=68f961ee81274db3b65d468a23bf5a69 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.324 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.585 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.585 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.585 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.585 14:05:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.723 14:05:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:04.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:04.723 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.723 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:04.724 Found net devices under 0000:31:00.0: cvl_0_0 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:04.724 Found net devices under 0000:31:00.1: cvl_0_1 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.724 14:05:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:04.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:24:04.724 00:24:04.724 --- 10.0.0.2 ping statistics --- 00:24:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.724 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:24:04.724 00:24:04.724 --- 10.0.0.1 ping statistics --- 00:24:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.724 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2499929 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2499929 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2499929 ']' 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:04.724 14:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.724 [2024-11-06 14:05:50.279086] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:24:04.724 [2024-11-06 14:05:50.279151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.724 [2024-11-06 14:05:50.378647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.724 [2024-11-06 14:05:50.429526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.724 [2024-11-06 14:05:50.429575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.724 [2024-11-06 14:05:50.429584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.724 [2024-11-06 14:05:50.429592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.724 [2024-11-06 14:05:50.429598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.724 [2024-11-06 14:05:50.430389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.985 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.985 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:04.985 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.985 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.985 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 [2024-11-06 14:05:51.146698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 null0 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 68f961ee81274db3b65d468a23bf5a69 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 [2024-11-06 14:05:51.207069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.986 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.247 nvme0n1 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.247 [ 00:24:05.247 { 00:24:05.247 "name": "nvme0n1", 00:24:05.247 "aliases": [ 00:24:05.247 "68f961ee-8127-4db3-b65d-468a23bf5a69" 00:24:05.247 ], 00:24:05.247 "product_name": "NVMe disk", 00:24:05.247 "block_size": 512, 00:24:05.247 "num_blocks": 2097152, 00:24:05.247 "uuid": "68f961ee-8127-4db3-b65d-468a23bf5a69", 00:24:05.247 "numa_id": 0, 00:24:05.247 "assigned_rate_limits": { 00:24:05.247 "rw_ios_per_sec": 0, 00:24:05.247 "rw_mbytes_per_sec": 0, 00:24:05.247 "r_mbytes_per_sec": 0, 00:24:05.247 "w_mbytes_per_sec": 0 00:24:05.247 }, 00:24:05.247 "claimed": false, 00:24:05.247 "zoned": false, 00:24:05.247 "supported_io_types": { 00:24:05.247 "read": true, 00:24:05.247 "write": true, 00:24:05.247 "unmap": false, 00:24:05.247 "flush": true, 00:24:05.247 "reset": true, 00:24:05.247 "nvme_admin": true, 00:24:05.247 "nvme_io": true, 00:24:05.247 "nvme_io_md": false, 00:24:05.247 "write_zeroes": true, 00:24:05.247 "zcopy": false, 00:24:05.247 "get_zone_info": false, 00:24:05.247 "zone_management": false, 00:24:05.247 "zone_append": false, 00:24:05.247 "compare": true, 00:24:05.247 "compare_and_write": true, 00:24:05.247 "abort": true, 00:24:05.247 "seek_hole": false, 00:24:05.247 "seek_data": false, 00:24:05.247 "copy": true, 00:24:05.247 
"nvme_iov_md": false 00:24:05.247 }, 00:24:05.247 "memory_domains": [ 00:24:05.247 { 00:24:05.247 "dma_device_id": "system", 00:24:05.247 "dma_device_type": 1 00:24:05.247 } 00:24:05.247 ], 00:24:05.247 "driver_specific": { 00:24:05.247 "nvme": [ 00:24:05.247 { 00:24:05.247 "trid": { 00:24:05.247 "trtype": "TCP", 00:24:05.247 "adrfam": "IPv4", 00:24:05.247 "traddr": "10.0.0.2", 00:24:05.247 "trsvcid": "4420", 00:24:05.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.247 }, 00:24:05.247 "ctrlr_data": { 00:24:05.247 "cntlid": 1, 00:24:05.247 "vendor_id": "0x8086", 00:24:05.247 "model_number": "SPDK bdev Controller", 00:24:05.247 "serial_number": "00000000000000000000", 00:24:05.247 "firmware_revision": "25.01", 00:24:05.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.247 "oacs": { 00:24:05.247 "security": 0, 00:24:05.247 "format": 0, 00:24:05.247 "firmware": 0, 00:24:05.247 "ns_manage": 0 00:24:05.247 }, 00:24:05.247 "multi_ctrlr": true, 00:24:05.247 "ana_reporting": false 00:24:05.247 }, 00:24:05.247 "vs": { 00:24:05.247 "nvme_version": "1.3" 00:24:05.247 }, 00:24:05.247 "ns_data": { 00:24:05.247 "id": 1, 00:24:05.247 "can_share": true 00:24:05.247 } 00:24:05.247 } 00:24:05.247 ], 00:24:05.247 "mp_policy": "active_passive" 00:24:05.247 } 00:24:05.247 } 00:24:05.247 ] 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.247 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.247 [2024-11-06 14:05:51.484821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:05.247 [2024-11-06 14:05:51.484917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xd144a0 (9): Bad file descriptor 00:24:05.508 [2024-11-06 14:05:51.616859] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:05.508 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.508 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.508 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.508 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.508 [ 00:24:05.508 { 00:24:05.508 "name": "nvme0n1", 00:24:05.508 "aliases": [ 00:24:05.508 "68f961ee-8127-4db3-b65d-468a23bf5a69" 00:24:05.508 ], 00:24:05.508 "product_name": "NVMe disk", 00:24:05.508 "block_size": 512, 00:24:05.508 "num_blocks": 2097152, 00:24:05.508 "uuid": "68f961ee-8127-4db3-b65d-468a23bf5a69", 00:24:05.508 "numa_id": 0, 00:24:05.508 "assigned_rate_limits": { 00:24:05.508 "rw_ios_per_sec": 0, 00:24:05.508 "rw_mbytes_per_sec": 0, 00:24:05.508 "r_mbytes_per_sec": 0, 00:24:05.508 "w_mbytes_per_sec": 0 00:24:05.508 }, 00:24:05.508 "claimed": false, 00:24:05.508 "zoned": false, 00:24:05.508 "supported_io_types": { 00:24:05.508 "read": true, 00:24:05.508 "write": true, 00:24:05.508 "unmap": false, 00:24:05.508 "flush": true, 00:24:05.508 "reset": true, 00:24:05.508 "nvme_admin": true, 00:24:05.508 "nvme_io": true, 00:24:05.508 "nvme_io_md": false, 00:24:05.508 "write_zeroes": true, 00:24:05.508 "zcopy": false, 00:24:05.508 "get_zone_info": false, 00:24:05.508 "zone_management": false, 00:24:05.508 "zone_append": false, 00:24:05.508 "compare": true, 00:24:05.508 "compare_and_write": true, 00:24:05.508 "abort": true, 00:24:05.508 "seek_hole": false, 00:24:05.508 "seek_data": false, 00:24:05.508 "copy": true, 00:24:05.508 "nvme_iov_md": false 00:24:05.508 }, 00:24:05.508 "memory_domains": [ 
00:24:05.508 { 00:24:05.508 "dma_device_id": "system", 00:24:05.508 "dma_device_type": 1 00:24:05.508 } 00:24:05.508 ], 00:24:05.508 "driver_specific": { 00:24:05.508 "nvme": [ 00:24:05.508 { 00:24:05.508 "trid": { 00:24:05.508 "trtype": "TCP", 00:24:05.508 "adrfam": "IPv4", 00:24:05.508 "traddr": "10.0.0.2", 00:24:05.508 "trsvcid": "4420", 00:24:05.508 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.508 }, 00:24:05.508 "ctrlr_data": { 00:24:05.508 "cntlid": 2, 00:24:05.508 "vendor_id": "0x8086", 00:24:05.508 "model_number": "SPDK bdev Controller", 00:24:05.508 "serial_number": "00000000000000000000", 00:24:05.508 "firmware_revision": "25.01", 00:24:05.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.509 "oacs": { 00:24:05.509 "security": 0, 00:24:05.509 "format": 0, 00:24:05.509 "firmware": 0, 00:24:05.509 "ns_manage": 0 00:24:05.509 }, 00:24:05.509 "multi_ctrlr": true, 00:24:05.509 "ana_reporting": false 00:24:05.509 }, 00:24:05.509 "vs": { 00:24:05.509 "nvme_version": "1.3" 00:24:05.509 }, 00:24:05.509 "ns_data": { 00:24:05.509 "id": 1, 00:24:05.509 "can_share": true 00:24:05.509 } 00:24:05.509 } 00:24:05.509 ], 00:24:05.509 "mp_policy": "active_passive" 00:24:05.509 } 00:24:05.509 } 00:24:05.509 ] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.a0n4d1wCmP 
00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.a0n4d1wCmP 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.a0n4d1wCmP 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 [2024-11-06 14:05:51.705540] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.509 [2024-11-06 14:05:51.705693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.509 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.509 [2024-11-06 14:05:51.729621] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.771 nvme0n1 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.771 [ 00:24:05.771 { 00:24:05.771 "name": "nvme0n1", 00:24:05.771 "aliases": [ 00:24:05.771 "68f961ee-8127-4db3-b65d-468a23bf5a69" 00:24:05.771 ], 00:24:05.771 "product_name": "NVMe disk", 00:24:05.771 "block_size": 512, 00:24:05.771 "num_blocks": 2097152, 00:24:05.771 "uuid": "68f961ee-8127-4db3-b65d-468a23bf5a69", 00:24:05.771 "numa_id": 0, 00:24:05.771 "assigned_rate_limits": { 00:24:05.771 "rw_ios_per_sec": 0, 00:24:05.771 
"rw_mbytes_per_sec": 0, 00:24:05.771 "r_mbytes_per_sec": 0, 00:24:05.771 "w_mbytes_per_sec": 0 00:24:05.771 }, 00:24:05.771 "claimed": false, 00:24:05.771 "zoned": false, 00:24:05.771 "supported_io_types": { 00:24:05.771 "read": true, 00:24:05.771 "write": true, 00:24:05.771 "unmap": false, 00:24:05.771 "flush": true, 00:24:05.771 "reset": true, 00:24:05.771 "nvme_admin": true, 00:24:05.771 "nvme_io": true, 00:24:05.771 "nvme_io_md": false, 00:24:05.771 "write_zeroes": true, 00:24:05.771 "zcopy": false, 00:24:05.771 "get_zone_info": false, 00:24:05.771 "zone_management": false, 00:24:05.771 "zone_append": false, 00:24:05.771 "compare": true, 00:24:05.771 "compare_and_write": true, 00:24:05.771 "abort": true, 00:24:05.771 "seek_hole": false, 00:24:05.771 "seek_data": false, 00:24:05.771 "copy": true, 00:24:05.771 "nvme_iov_md": false 00:24:05.771 }, 00:24:05.771 "memory_domains": [ 00:24:05.771 { 00:24:05.771 "dma_device_id": "system", 00:24:05.771 "dma_device_type": 1 00:24:05.771 } 00:24:05.771 ], 00:24:05.771 "driver_specific": { 00:24:05.771 "nvme": [ 00:24:05.771 { 00:24:05.771 "trid": { 00:24:05.771 "trtype": "TCP", 00:24:05.771 "adrfam": "IPv4", 00:24:05.771 "traddr": "10.0.0.2", 00:24:05.771 "trsvcid": "4421", 00:24:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.771 }, 00:24:05.771 "ctrlr_data": { 00:24:05.771 "cntlid": 3, 00:24:05.771 "vendor_id": "0x8086", 00:24:05.771 "model_number": "SPDK bdev Controller", 00:24:05.771 "serial_number": "00000000000000000000", 00:24:05.771 "firmware_revision": "25.01", 00:24:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.771 "oacs": { 00:24:05.771 "security": 0, 00:24:05.771 "format": 0, 00:24:05.771 "firmware": 0, 00:24:05.771 "ns_manage": 0 00:24:05.771 }, 00:24:05.771 "multi_ctrlr": true, 00:24:05.771 "ana_reporting": false 00:24:05.771 }, 00:24:05.771 "vs": { 00:24:05.771 "nvme_version": "1.3" 00:24:05.771 }, 00:24:05.771 "ns_data": { 00:24:05.771 "id": 1, 00:24:05.771 "can_share": true 00:24:05.771 } 
00:24:05.771 } 00:24:05.771 ], 00:24:05.771 "mp_policy": "active_passive" 00:24:05.771 } 00:24:05.771 } 00:24:05.771 ] 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.a0n4d1wCmP 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.771 rmmod nvme_tcp 00:24:05.771 rmmod nvme_fabrics 00:24:05.771 rmmod nvme_keyring 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:05.771 14:05:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2499929 ']' 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2499929 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2499929 ']' 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2499929 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2499929 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2499929' 00:24:05.771 killing process with pid 2499929 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2499929 00:24:05.771 14:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2499929 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.032 
14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.032 14:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.945 14:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.945 00:24:07.945 real 0m11.879s 00:24:07.945 user 0m4.302s 00:24:07.945 sys 0m6.130s 00:24:07.945 14:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.945 14:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.945 ************************************ 00:24:07.945 END TEST nvmf_async_init 00:24:07.945 ************************************ 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.206 ************************************ 00:24:08.206 START TEST dma 00:24:08.206 ************************************ 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:08.206 * Looking for test storage... 00:24:08.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.206 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:08.467 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.468 --rc genhtml_branch_coverage=1 00:24:08.468 --rc genhtml_function_coverage=1 00:24:08.468 --rc genhtml_legend=1 00:24:08.468 --rc geninfo_all_blocks=1 00:24:08.468 --rc geninfo_unexecuted_blocks=1 00:24:08.468 00:24:08.468 ' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:08.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.468 --rc genhtml_branch_coverage=1 00:24:08.468 --rc genhtml_function_coverage=1 
00:24:08.468 --rc genhtml_legend=1 00:24:08.468 --rc geninfo_all_blocks=1 00:24:08.468 --rc geninfo_unexecuted_blocks=1 00:24:08.468 00:24:08.468 ' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.468 --rc genhtml_branch_coverage=1 00:24:08.468 --rc genhtml_function_coverage=1 00:24:08.468 --rc genhtml_legend=1 00:24:08.468 --rc geninfo_all_blocks=1 00:24:08.468 --rc geninfo_unexecuted_blocks=1 00:24:08.468 00:24:08.468 ' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.468 --rc genhtml_branch_coverage=1 00:24:08.468 --rc genhtml_function_coverage=1 00:24:08.468 --rc genhtml_legend=1 00:24:08.468 --rc geninfo_all_blocks=1 00:24:08.468 --rc geninfo_unexecuted_blocks=1 00:24:08.468 00:24:08.468 ' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:08.468 
14:05:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:08.468 00:24:08.468 real 0m0.241s 00:24:08.468 user 0m0.143s 00:24:08.468 sys 0m0.113s 00:24:08.468 14:05:54 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:08.468 ************************************ 00:24:08.468 END TEST dma 00:24:08.468 ************************************ 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.468 ************************************ 00:24:08.468 START TEST nvmf_identify 00:24:08.468 ************************************ 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.468 * Looking for test storage... 
00:24:08.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.468 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.730 --rc genhtml_branch_coverage=1 00:24:08.730 --rc genhtml_function_coverage=1 00:24:08.730 --rc genhtml_legend=1 00:24:08.730 --rc geninfo_all_blocks=1 00:24:08.730 --rc geninfo_unexecuted_blocks=1 00:24:08.730 00:24:08.730 ' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:24:08.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.730 --rc genhtml_branch_coverage=1 00:24:08.730 --rc genhtml_function_coverage=1 00:24:08.730 --rc genhtml_legend=1 00:24:08.730 --rc geninfo_all_blocks=1 00:24:08.730 --rc geninfo_unexecuted_blocks=1 00:24:08.730 00:24:08.730 ' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.730 --rc genhtml_branch_coverage=1 00:24:08.730 --rc genhtml_function_coverage=1 00:24:08.730 --rc genhtml_legend=1 00:24:08.730 --rc geninfo_all_blocks=1 00:24:08.730 --rc geninfo_unexecuted_blocks=1 00:24:08.730 00:24:08.730 ' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.730 --rc genhtml_branch_coverage=1 00:24:08.730 --rc genhtml_function_coverage=1 00:24:08.730 --rc genhtml_legend=1 00:24:08.730 --rc geninfo_all_blocks=1 00:24:08.730 --rc geninfo_unexecuted_blocks=1 00:24:08.730 00:24:08.730 ' 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:08.730 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
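The `paths/export.sh` trace above shows the same `/opt/...` directories prepended to `PATH` many times over, because the export script has been sourced repeatedly across runs. A minimal sketch of deduplicating such a `PATH` while keeping first-occurrence order (the `dedup_path` helper and the demo value are illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Hypothetical PATH with the kind of duplicates visible in the trace.
demo_path="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin"

# Keep only the first occurrence of each entry, preserving order.
dedup_path() {
  local IFS=':' seen='' entry out=''
  for entry in $1; do
    case ":$seen:" in
      *":$entry:"*) ;;                          # already kept, skip
      *) seen="$seen:$entry"; out="${out:+$out:}$entry" ;;
    esac
  done
  printf '%s\n' "$out"
}

dedup_path "$demo_path"
```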
-- # export PATH 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
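The `common.sh: line 33: [: : integer expression expected` message above comes from evaluating `'[' '' -eq 1 ']'`, i.e. testing an empty string with a numeric operator. A small sketch of the guarded form that avoids the error (the `is_one` helper and `flag` variable are illustrative stand-ins, not the actual SPDK code):

```shell
#!/usr/bin/env bash
# Guarded numeric test: default an empty/unset value to 0 so
# '[' never sees an empty string where an integer is required.
is_one() { [ "${1:-0}" -eq 1 ]; }

flag=""   # hypothetical stand-in for the unset variable in common.sh

# The unguarded form '[ "$flag" -eq 1 ]' reproduces the
# "integer expression expected" error seen in the trace;
# the guarded helper simply evaluates to false instead.
if is_one "$flag"; then
  echo "flag set"
else
  echo "flag not set"
fi
```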
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.731 14:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.869 14:06:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:16.869 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.869 
14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:16.869 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:16.869 Found net devices under 0000:31:00.0: cvl_0_0 00:24:16.869 14:06:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:16.869 Found net devices under 0000:31:00.1: cvl_0_1 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:24:16.869 00:24:16.869 --- 10.0.0.2 ping statistics --- 00:24:16.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.869 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:24:16.869 00:24:16.869 --- 10.0.0.1 ping statistics --- 00:24:16.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.869 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:16.869 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
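The `ipts` call in the trace expands into an `iptables` invocation tagged with a `SPDK_NVMF:` comment, so test-added firewall rules can later be identified and removed selectively. A minimal sketch of that tagging pattern, with `echo` standing in for the real `iptables` so it runs unprivileged (the stub is an assumption for the demo):

```shell
#!/usr/bin/env bash
# Stand-in for iptables so this sketch runs without root.
iptables() { echo "iptables $*"; }

# Tag every rule with a recognizable comment, mirroring the
# ipts() helper whose expansion appears in the trace.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```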
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2504684 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2504684 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2504684 ']' 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
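`waitforlisten` above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal wait-for-socket loop in the same spirit (function name, path, and retry budget here are illustrative assumptions, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket appears, or give up after
# a bounded number of retries (0.1 s apart).
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}

# Demo against a socket that never appears, with a tiny budget.
if wait_for_sock /tmp/does-not-exist.sock 3; then
  echo "listening"
else
  echo "timed out"
fi
```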
00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:16.870 14:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.870 [2024-11-06 14:06:02.605295] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:24:16.870 [2024-11-06 14:06:02.605359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.870 [2024-11-06 14:06:02.705105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.870 [2024-11-06 14:06:02.759978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.870 [2024-11-06 14:06:02.760035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.870 [2024-11-06 14:06:02.760045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.870 [2024-11-06 14:06:02.760054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.870 [2024-11-06 14:06:02.760062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.870 [2024-11-06 14:06:02.762419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.870 [2024-11-06 14:06:02.762570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.870 [2024-11-06 14:06:02.762732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.870 [2024-11-06 14:06:02.762733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 [2024-11-06 14:06:03.436790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 Malloc0 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 [2024-11-06 14:06:03.556753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 14:06:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 [ 00:24:17.443 { 00:24:17.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:17.443 "subtype": "Discovery", 00:24:17.443 "listen_addresses": [ 00:24:17.443 { 00:24:17.443 "trtype": "TCP", 00:24:17.443 "adrfam": "IPv4", 00:24:17.443 "traddr": "10.0.0.2", 00:24:17.443 "trsvcid": "4420" 00:24:17.443 } 00:24:17.443 ], 00:24:17.443 "allow_any_host": true, 00:24:17.443 "hosts": [] 00:24:17.443 }, 00:24:17.443 { 00:24:17.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.443 "subtype": "NVMe", 00:24:17.443 "listen_addresses": [ 00:24:17.443 { 00:24:17.443 "trtype": "TCP", 00:24:17.443 "adrfam": "IPv4", 00:24:17.443 "traddr": "10.0.0.2", 00:24:17.443 "trsvcid": "4420" 00:24:17.443 } 00:24:17.443 ], 00:24:17.443 "allow_any_host": true, 00:24:17.443 "hosts": [], 00:24:17.443 "serial_number": "SPDK00000000000001", 00:24:17.443 "model_number": "SPDK bdev Controller", 00:24:17.443 "max_namespaces": 32, 00:24:17.443 "min_cntlid": 1, 00:24:17.443 "max_cntlid": 65519, 00:24:17.443 "namespaces": [ 00:24:17.443 { 00:24:17.443 "nsid": 1, 00:24:17.443 "bdev_name": "Malloc0", 00:24:17.443 "name": "Malloc0", 00:24:17.443 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:17.443 "eui64": "ABCDEF0123456789", 00:24:17.443 "uuid": "972752f9-ab7a-4056-a5be-12c8f58e0cc5" 00:24:17.443 } 00:24:17.443 ] 00:24:17.443 } 00:24:17.443 ] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.443 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
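The `nvmf_get_subsystems` RPC above returns a JSON array describing the discovery subsystem and `nqn.2016-06.io.spdk:cnode1`. A rough sketch of pulling the NQNs out of that pretty-printed layout with `sed` (the excerpt is abbreviated from the trace; a real consumer should prefer `jq` or a proper JSON parser over line-oriented matching):

```shell
#!/usr/bin/env bash
# Abbreviated, hypothetical excerpt of the nvmf_get_subsystems output.
json='
[
  { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
  { "nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe" }
]'

# Print each "nqn" value, one per line; adequate only for this
# fixed one-key-per-line layout.
extract_nqns() {
  printf '%s\n' "$1" | sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p'
}

extract_nqns "$json"
```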
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:17.443 [2024-11-06 14:06:03.619992] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:24:17.443 [2024-11-06 14:06:03.620051] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504835 ] 00:24:17.443 [2024-11-06 14:06:03.676447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:17.443 [2024-11-06 14:06:03.676521] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.443 [2024-11-06 14:06:03.676527] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.443 [2024-11-06 14:06:03.676547] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.443 [2024-11-06 14:06:03.676561] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.443 [2024-11-06 14:06:03.680248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:17.443 [2024-11-06 14:06:03.680293] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd50550 0 00:24:17.443 [2024-11-06 14:06:03.687758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.443 [2024-11-06 14:06:03.687776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.443 [2024-11-06 14:06:03.687782] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.443 [2024-11-06 14:06:03.687785] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.443 [2024-11-06 14:06:03.687831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.443 [2024-11-06 14:06:03.687838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.443 [2024-11-06 14:06:03.687843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.443 [2024-11-06 14:06:03.687861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.443 [2024-11-06 14:06:03.687886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.443 [2024-11-06 14:06:03.694760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.443 [2024-11-06 14:06:03.694771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.443 [2024-11-06 14:06:03.694775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.443 [2024-11-06 14:06:03.694781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.443 [2024-11-06 14:06:03.694793] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.443 [2024-11-06 14:06:03.694802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:17.443 [2024-11-06 14:06:03.694808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:17.443 [2024-11-06 14:06:03.694825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.443 [2024-11-06 14:06:03.694829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.443 [2024-11-06 14:06:03.694833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 
00:24:17.444 [2024-11-06 14:06:03.694842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.694858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.695095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.695106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.695110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.695121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:17.444 [2024-11-06 14:06:03.695130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:17.444 [2024-11-06 14:06:03.695137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.695152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.695163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.695383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.695389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:17.444 [2024-11-06 14:06:03.695393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.695402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:17.444 [2024-11-06 14:06:03.695412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.444 [2024-11-06 14:06:03.695419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.695433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.695444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.695628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.695634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.695638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.695647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.444 [2024-11-06 14:06:03.695657] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.695672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.695682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.695902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.695909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.695913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.695919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.695925] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.444 [2024-11-06 14:06:03.695930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.444 [2024-11-06 14:06:03.695938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.444 [2024-11-06 14:06:03.696047] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:17.444 [2024-11-06 14:06:03.696052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:17.444 [2024-11-06 14:06:03.696064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.696078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.696089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.696284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.696290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.696294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.696303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.444 [2024-11-06 14:06:03.696313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.696327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.696337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 
14:06:03.696539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.696545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.696548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.696557] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.444 [2024-11-06 14:06:03.696562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.444 [2024-11-06 14:06:03.696570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:17.444 [2024-11-06 14:06:03.696581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.444 [2024-11-06 14:06:03.696591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.696605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.444 [2024-11-06 14:06:03.696615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.696882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.444 [2024-11-06 14:06:03.696889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:17.444 [2024-11-06 14:06:03.696893] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696898] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50550): datao=0, datal=4096, cccid=0 00:24:17.444 [2024-11-06 14:06:03.696903] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2100) on tqpair(0xd50550): expected_datao=0, payload_size=4096 00:24:17.444 [2024-11-06 14:06:03.696908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696917] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.696922] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.697064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.697071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.697074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.697078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.444 [2024-11-06 14:06:03.697087] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:17.444 [2024-11-06 14:06:03.697092] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:17.444 [2024-11-06 14:06:03.697097] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:17.444 [2024-11-06 14:06:03.697106] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:17.444 [2024-11-06 14:06:03.697111] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:24:17.444 [2024-11-06 14:06:03.697116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.444 [2024-11-06 14:06:03.697127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.444 [2024-11-06 14:06:03.697134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.697138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.697142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.444 [2024-11-06 14:06:03.697149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.444 [2024-11-06 14:06:03.697160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.444 [2024-11-06 14:06:03.697345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.444 [2024-11-06 14:06:03.697351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.444 [2024-11-06 14:06:03.697354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.444 [2024-11-06 14:06:03.697358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.445 [2024-11-06 14:06:03.697367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.445 [2024-11-06 14:06:03.697390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.445 [2024-11-06 14:06:03.697410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.445 [2024-11-06 14:06:03.697429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.445 [2024-11-06 14:06:03.697447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.445 [2024-11-06 14:06:03.697456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:24:17.445 [2024-11-06 14:06:03.697463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.445 [2024-11-06 14:06:03.697485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:24:17.445 [2024-11-06 14:06:03.697490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2280, cid 1, qid 0 00:24:17.445 [2024-11-06 14:06:03.697495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2400, cid 2, qid 0 00:24:17.445 [2024-11-06 14:06:03.697500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.445 [2024-11-06 14:06:03.697504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:24:17.445 [2024-11-06 14:06:03.697767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.445 [2024-11-06 14:06:03.697774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.445 [2024-11-06 14:06:03.697777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50550 00:24:17.445 [2024-11-06 14:06:03.697790] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:17.445 [2024-11-06 14:06:03.697796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:17.445 [2024-11-06 14:06:03.697806] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.697810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50550) 00:24:17.445 [2024-11-06 14:06:03.697817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.445 [2024-11-06 14:06:03.697827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:24:17.445 [2024-11-06 14:06:03.698026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.445 [2024-11-06 14:06:03.698033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.445 [2024-11-06 14:06:03.698037] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.698040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50550): datao=0, datal=4096, cccid=4 00:24:17.445 [2024-11-06 14:06:03.698045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50550): expected_datao=0, payload_size=4096 00:24:17.445 [2024-11-06 14:06:03.698049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.698062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.445 [2024-11-06 14:06:03.698066] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.739951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.709 [2024-11-06 14:06:03.739965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.709 [2024-11-06 14:06:03.739970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.739974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50550 00:24:17.709 [2024-11-06 14:06:03.739991] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:17.709 [2024-11-06 14:06:03.740025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50550) 00:24:17.709 [2024-11-06 14:06:03.740038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.709 [2024-11-06 14:06:03.740046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd50550) 00:24:17.709 [2024-11-06 14:06:03.740060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.709 [2024-11-06 14:06:03.740080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:24:17.709 [2024-11-06 14:06:03.740085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2880, cid 5, qid 0 00:24:17.709 [2024-11-06 14:06:03.740290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.709 [2024-11-06 14:06:03.740297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.709 [2024-11-06 14:06:03.740301] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740305] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50550): datao=0, datal=1024, cccid=4 00:24:17.709 [2024-11-06 14:06:03.740310] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50550): expected_datao=0, 
payload_size=1024 00:24:17.709 [2024-11-06 14:06:03.740314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.709 [2024-11-06 14:06:03.740337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.709 [2024-11-06 14:06:03.740340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.740344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2880) on tqpair=0xd50550 00:24:17.709 [2024-11-06 14:06:03.781075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.709 [2024-11-06 14:06:03.781086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.709 [2024-11-06 14:06:03.781089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.781098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50550 00:24:17.709 [2024-11-06 14:06:03.781112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.709 [2024-11-06 14:06:03.781116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50550) 00:24:17.709 [2024-11-06 14:06:03.781124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.709 [2024-11-06 14:06:03.781140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:24:17.709 [2024-11-06 14:06:03.781383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.710 [2024-11-06 14:06:03.781390] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.710 [2024-11-06 14:06:03.781393] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781397] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50550): datao=0, datal=3072, cccid=4 00:24:17.710 [2024-11-06 14:06:03.781402] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50550): expected_datao=0, payload_size=3072 00:24:17.710 [2024-11-06 14:06:03.781406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781417] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.710 [2024-11-06 14:06:03.781550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.710 [2024-11-06 14:06:03.781554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50550 00:24:17.710 [2024-11-06 14:06:03.781567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.710 [2024-11-06 14:06:03.781570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50550) 00:24:17.710 [2024-11-06 14:06:03.781577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.710 [2024-11-06 14:06:03.781591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:24:17.710 [2024-11-06 14:06:03.781826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.710 [2024-11-06 
14:06:03.781833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:17.710 [2024-11-06 14:06:03.781837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:17.710 [2024-11-06 14:06:03.781840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50550): datao=0, datal=8, cccid=4
00:24:17.710 [2024-11-06 14:06:03.781845] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50550): expected_datao=0, payload_size=8
00:24:17.710 [2024-11-06 14:06:03.781849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.710 [2024-11-06 14:06:03.781856] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:17.710 [2024-11-06 14:06:03.781859] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:17.710 [2024-11-06 14:06:03.822942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.710 [2024-11-06 14:06:03.822953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.710 [2024-11-06 14:06:03.822957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.710 [2024-11-06 14:06:03.822961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50550
00:24:17.710 =====================================================
00:24:17.710 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:17.710 =====================================================
00:24:17.710 Controller Capabilities/Features
00:24:17.710 ================================
00:24:17.710 Vendor ID: 0000
00:24:17.710 Subsystem Vendor ID: 0000
00:24:17.710 Serial Number: ....................
00:24:17.710 Model Number: ........................................
00:24:17.710 Firmware Version: 25.01
00:24:17.710 Recommended Arb Burst: 0
00:24:17.710 IEEE OUI Identifier: 00 00 00
00:24:17.710 Multi-path I/O
00:24:17.710 May have multiple subsystem ports: No
00:24:17.710 May have multiple controllers: No
00:24:17.710 Associated with SR-IOV VF: No
00:24:17.710 Max Data Transfer Size: 131072
00:24:17.710 Max Number of Namespaces: 0
00:24:17.710 Max Number of I/O Queues: 1024
00:24:17.710 NVMe Specification Version (VS): 1.3
00:24:17.710 NVMe Specification Version (Identify): 1.3
00:24:17.710 Maximum Queue Entries: 128
00:24:17.710 Contiguous Queues Required: Yes
00:24:17.710 Arbitration Mechanisms Supported
00:24:17.710 Weighted Round Robin: Not Supported
00:24:17.710 Vendor Specific: Not Supported
00:24:17.710 Reset Timeout: 15000 ms
00:24:17.710 Doorbell Stride: 4 bytes
00:24:17.710 NVM Subsystem Reset: Not Supported
00:24:17.710 Command Sets Supported
00:24:17.710 NVM Command Set: Supported
00:24:17.710 Boot Partition: Not Supported
00:24:17.710 Memory Page Size Minimum: 4096 bytes
00:24:17.710 Memory Page Size Maximum: 4096 bytes
00:24:17.710 Persistent Memory Region: Not Supported
00:24:17.710 Optional Asynchronous Events Supported
00:24:17.710 Namespace Attribute Notices: Not Supported
00:24:17.710 Firmware Activation Notices: Not Supported
00:24:17.710 ANA Change Notices: Not Supported
00:24:17.710 PLE Aggregate Log Change Notices: Not Supported
00:24:17.710 LBA Status Info Alert Notices: Not Supported
00:24:17.710 EGE Aggregate Log Change Notices: Not Supported
00:24:17.710 Normal NVM Subsystem Shutdown event: Not Supported
00:24:17.710 Zone Descriptor Change Notices: Not Supported
00:24:17.710 Discovery Log Change Notices: Supported
00:24:17.710 Controller Attributes
00:24:17.710 128-bit Host Identifier: Not Supported
00:24:17.710 Non-Operational Permissive Mode: Not Supported
00:24:17.710 NVM Sets: Not Supported
00:24:17.710 Read Recovery Levels: Not Supported
00:24:17.710 Endurance Groups: Not Supported
00:24:17.710 Predictable Latency Mode: Not Supported
00:24:17.710 Traffic Based Keep ALive: Not Supported
00:24:17.710 Namespace Granularity: Not Supported
00:24:17.710 SQ Associations: Not Supported
00:24:17.710 UUID List: Not Supported
00:24:17.710 Multi-Domain Subsystem: Not Supported
00:24:17.710 Fixed Capacity Management: Not Supported
00:24:17.710 Variable Capacity Management: Not Supported
00:24:17.710 Delete Endurance Group: Not Supported
00:24:17.710 Delete NVM Set: Not Supported
00:24:17.710 Extended LBA Formats Supported: Not Supported
00:24:17.710 Flexible Data Placement Supported: Not Supported
00:24:17.710
00:24:17.710 Controller Memory Buffer Support
00:24:17.710 ================================
00:24:17.710 Supported: No
00:24:17.710
00:24:17.710 Persistent Memory Region Support
00:24:17.710 ================================
00:24:17.710 Supported: No
00:24:17.710
00:24:17.710 Admin Command Set Attributes
00:24:17.710 ============================
00:24:17.710 Security Send/Receive: Not Supported
00:24:17.710 Format NVM: Not Supported
00:24:17.710 Firmware Activate/Download: Not Supported
00:24:17.710 Namespace Management: Not Supported
00:24:17.710 Device Self-Test: Not Supported
00:24:17.710 Directives: Not Supported
00:24:17.710 NVMe-MI: Not Supported
00:24:17.710 Virtualization Management: Not Supported
00:24:17.710 Doorbell Buffer Config: Not Supported
00:24:17.710 Get LBA Status Capability: Not Supported
00:24:17.710 Command & Feature Lockdown Capability: Not Supported
00:24:17.710 Abort Command Limit: 1
00:24:17.710 Async Event Request Limit: 4
00:24:17.710 Number of Firmware Slots: N/A
00:24:17.710 Firmware Slot 1 Read-Only: N/A
00:24:17.710 Firmware Activation Without Reset: N/A
00:24:17.710 Multiple Update Detection Support: N/A
00:24:17.710 Firmware Update Granularity: No Information Provided
00:24:17.710 Per-Namespace SMART Log: No
00:24:17.710 Asymmetric Namespace Access Log Page: Not Supported
00:24:17.710 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:17.710 Command Effects Log Page: Not Supported
00:24:17.710 Get Log Page Extended Data: Supported
00:24:17.710 Telemetry Log Pages: Not Supported
00:24:17.710 Persistent Event Log Pages: Not Supported
00:24:17.710 Supported Log Pages Log Page: May Support
00:24:17.710 Commands Supported & Effects Log Page: Not Supported
00:24:17.710 Feature Identifiers & Effects Log Page:May Support
00:24:17.710 NVMe-MI Commands & Effects Log Page: May Support
00:24:17.710 Data Area 4 for Telemetry Log: Not Supported
00:24:17.710 Error Log Page Entries Supported: 128
00:24:17.710 Keep Alive: Not Supported
00:24:17.710
00:24:17.710 NVM Command Set Attributes
00:24:17.710 ==========================
00:24:17.710 Submission Queue Entry Size
00:24:17.710 Max: 1
00:24:17.710 Min: 1
00:24:17.710 Completion Queue Entry Size
00:24:17.710 Max: 1
00:24:17.710 Min: 1
00:24:17.710 Number of Namespaces: 0
00:24:17.710 Compare Command: Not Supported
00:24:17.710 Write Uncorrectable Command: Not Supported
00:24:17.710 Dataset Management Command: Not Supported
00:24:17.710 Write Zeroes Command: Not Supported
00:24:17.710 Set Features Save Field: Not Supported
00:24:17.710 Reservations: Not Supported
00:24:17.710 Timestamp: Not Supported
00:24:17.710 Copy: Not Supported
00:24:17.710 Volatile Write Cache: Not Present
00:24:17.710 Atomic Write Unit (Normal): 1
00:24:17.710 Atomic Write Unit (PFail): 1
00:24:17.710 Atomic Compare & Write Unit: 1
00:24:17.710 Fused Compare & Write: Supported
00:24:17.710 Scatter-Gather List
00:24:17.710 SGL Command Set: Supported
00:24:17.711 SGL Keyed: Supported
00:24:17.711 SGL Bit Bucket Descriptor: Not Supported
00:24:17.711 SGL Metadata Pointer: Not Supported
00:24:17.711 Oversized SGL: Not Supported
00:24:17.711 SGL Metadata Address: Not Supported
00:24:17.711 SGL Offset: Supported
00:24:17.711 Transport SGL Data Block: Not Supported
00:24:17.711 Replay Protected Memory Block: Not Supported
00:24:17.711
00:24:17.711 Firmware Slot Information
00:24:17.711 =========================
00:24:17.711 Active slot: 0
00:24:17.711
00:24:17.711
00:24:17.711 Error Log
00:24:17.711 =========
00:24:17.711
00:24:17.711 Active Namespaces
00:24:17.711 =================
00:24:17.711 Discovery Log Page
00:24:17.711 ==================
00:24:17.711 Generation Counter: 2
00:24:17.711 Number of Records: 2
00:24:17.711 Record Format: 0
00:24:17.711
00:24:17.711 Discovery Log Entry 0
00:24:17.711 ----------------------
00:24:17.711 Transport Type: 3 (TCP)
00:24:17.711 Address Family: 1 (IPv4)
00:24:17.711 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:17.711 Entry Flags:
00:24:17.711 Duplicate Returned Information: 1
00:24:17.711 Explicit Persistent Connection Support for Discovery: 1
00:24:17.711 Transport Requirements:
00:24:17.711 Secure Channel: Not Required
00:24:17.711 Port ID: 0 (0x0000)
00:24:17.711 Controller ID: 65535 (0xffff)
00:24:17.711 Admin Max SQ Size: 128
00:24:17.711 Transport Service Identifier: 4420
00:24:17.711 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:17.711 Transport Address: 10.0.0.2
00:24:17.711 Discovery Log Entry 1
00:24:17.711 ----------------------
00:24:17.711 Transport Type: 3 (TCP)
00:24:17.711 Address Family: 1 (IPv4)
00:24:17.711 Subsystem Type: 2 (NVM Subsystem)
00:24:17.711 Entry Flags:
00:24:17.711 Duplicate Returned Information: 0
00:24:17.711 Explicit Persistent Connection Support for Discovery: 0
00:24:17.711 Transport Requirements:
00:24:17.711 Secure Channel: Not Required
00:24:17.711 Port ID: 0 (0x0000)
00:24:17.711 Controller ID: 65535 (0xffff)
00:24:17.711 Admin Max SQ Size: 128
00:24:17.711 Transport Service Identifier: 4420
00:24:17.711 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:17.711 Transport Address: 10.0.0.2 [2024-11-06 14:06:03.823065] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:17.711 [2024-11-06
14:06:03.823078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.711 [2024-11-06 14:06:03.823094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2280) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.711 [2024-11-06 14:06:03.823103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2400) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.711 [2024-11-06 14:06:03.823113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.711 [2024-11-06 14:06:03.823130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.823146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.823161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.823256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 
14:06:03.823262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 [2024-11-06 14:06:03.823266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.823291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.823305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.823545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 14:06:03.823552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 [2024-11-06 14:06:03.823555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823565] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:17.711 [2024-11-06 14:06:03.823570] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:17.711 [2024-11-06 14:06:03.823581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 
[2024-11-06 14:06:03.823588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.823595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.823606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.823864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 14:06:03.823872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 [2024-11-06 14:06:03.823878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.823897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.823905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.823914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.823926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.824130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 14:06:03.824137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 [2024-11-06 14:06:03.824140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 
00:24:17.711 [2024-11-06 14:06:03.824155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.824169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.824180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.824399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 14:06:03.824405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 [2024-11-06 14:06:03.824409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.824424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.824438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.824448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.711 [2024-11-06 14:06:03.824633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.711 [2024-11-06 14:06:03.824642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.711 
[2024-11-06 14:06:03.824647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.711 [2024-11-06 14:06:03.824661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.711 [2024-11-06 14:06:03.824669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50550) 00:24:17.711 [2024-11-06 14:06:03.824677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.711 [2024-11-06 14:06:03.824688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:24:17.712 [2024-11-06 14:06:03.828791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.828802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.828807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.828811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50550 00:24:17.712 [2024-11-06 14:06:03.828819] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:17.712 00:24:17.712 14:06:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:17.712 [2024-11-06 14:06:03.875757] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:24:17.712 [2024-11-06 14:06:03.875806] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504841 ] 00:24:17.712 [2024-11-06 14:06:03.932274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:17.712 [2024-11-06 14:06:03.932344] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.712 [2024-11-06 14:06:03.932349] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.712 [2024-11-06 14:06:03.932365] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.712 [2024-11-06 14:06:03.932378] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.712 [2024-11-06 14:06:03.936061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:17.712 [2024-11-06 14:06:03.936099] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1963550 0 00:24:17.712 [2024-11-06 14:06:03.943764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.712 [2024-11-06 14:06:03.943779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.712 [2024-11-06 14:06:03.943784] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.712 [2024-11-06 14:06:03.943788] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.712 [2024-11-06 14:06:03.943821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.943826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.943831] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.943845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.712 [2024-11-06 14:06:03.943866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.951763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.951773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.951777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.951781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.712 [2024-11-06 14:06:03.951794] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.712 [2024-11-06 14:06:03.951801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:17.712 [2024-11-06 14:06:03.951807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:17.712 [2024-11-06 14:06:03.951821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.951825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.951829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.951837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.712 [2024-11-06 14:06:03.951857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.952014] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.952022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.952026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.712 [2024-11-06 14:06:03.952036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:17.712 [2024-11-06 14:06:03.952043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:17.712 [2024-11-06 14:06:03.952050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.952065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.712 [2024-11-06 14:06:03.952075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.952239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.952245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.952249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.712 [2024-11-06 14:06:03.952258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:24:17.712 [2024-11-06 14:06:03.952267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.712 [2024-11-06 14:06:03.952273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.952287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.712 [2024-11-06 14:06:03.952297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.952561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.952568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.952571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.712 [2024-11-06 14:06:03.952580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.712 [2024-11-06 14:06:03.952590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.952604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.712 [2024-11-06 14:06:03.952614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.952789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.952800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.952803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.712 [2024-11-06 14:06:03.952812] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.712 [2024-11-06 14:06:03.952817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.712 [2024-11-06 14:06:03.952826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.712 [2024-11-06 14:06:03.952934] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:17.712 [2024-11-06 14:06:03.952940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.712 [2024-11-06 14:06:03.952949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.952956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.712 [2024-11-06 14:06:03.952963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.712 [2024-11-06 14:06:03.952973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.712 [2024-11-06 14:06:03.953205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.712 [2024-11-06 14:06:03.953211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.712 [2024-11-06 14:06:03.953214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.712 [2024-11-06 14:06:03.953218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.713 [2024-11-06 14:06:03.953223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.713 [2024-11-06 14:06:03.953232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.953247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.713 [2024-11-06 14:06:03.953257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.713 [2024-11-06 14:06:03.953428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.713 [2024-11-06 14:06:03.953434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.713 [2024-11-06 14:06:03.953438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.713 [2024-11-06 14:06:03.953446] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.713 [2024-11-06 14:06:03.953451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.953459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:17.713 [2024-11-06 14:06:03.953472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.953481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.953497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.713 [2024-11-06 14:06:03.953507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.713 [2024-11-06 14:06:03.953714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.713 [2024-11-06 14:06:03.953720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.713 [2024-11-06 14:06:03.953724] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=4096, cccid=0 00:24:17.713 [2024-11-06 14:06:03.953733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5100) on tqpair(0x1963550): expected_datao=0, payload_size=4096 00:24:17.713 [2024-11-06 14:06:03.953737] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953764] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953769] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.713 [2024-11-06 14:06:03.953895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.713 [2024-11-06 14:06:03.953899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.713 [2024-11-06 14:06:03.953911] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:17.713 [2024-11-06 14:06:03.953916] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:17.713 [2024-11-06 14:06:03.953921] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:17.713 [2024-11-06 14:06:03.953928] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:17.713 [2024-11-06 14:06:03.953933] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:17.713 [2024-11-06 14:06:03.953938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.953948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.953955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953959] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.953963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.953970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.713 [2024-11-06 14:06:03.953981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5100, cid 0, qid 0 00:24:17.713 [2024-11-06 14:06:03.954180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.713 [2024-11-06 14:06:03.954188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.713 [2024-11-06 14:06:03.954191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.713 [2024-11-06 14:06:03.954202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.713 [2024-11-06 14:06:03.954224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:17.713 [2024-11-06 14:06:03.954243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.713 [2024-11-06 14:06:03.954262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.713 [2024-11-06 14:06:03.954280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.954289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.954295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.713 [2024-11-06 14:06:03.954318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x19c5100, cid 0, qid 0 00:24:17.713 [2024-11-06 14:06:03.954323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5280, cid 1, qid 0 00:24:17.713 [2024-11-06 14:06:03.954328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5400, cid 2, qid 0 00:24:17.713 [2024-11-06 14:06:03.954333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.713 [2024-11-06 14:06:03.954338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.713 [2024-11-06 14:06:03.954513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.713 [2024-11-06 14:06:03.954520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.713 [2024-11-06 14:06:03.954524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.713 [2024-11-06 14:06:03.954535] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:17.713 [2024-11-06 14:06:03.954540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.954549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.954556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:17.713 [2024-11-06 14:06:03.954563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.713 [2024-11-06 14:06:03.954568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.713 [2024-11-06 
14:06:03.954572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.713 [2024-11-06 14:06:03.954579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.713 [2024-11-06 14:06:03.954589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.713 [2024-11-06 14:06:03.954699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.713 [2024-11-06 14:06:03.954705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.713 [2024-11-06 14:06:03.954709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.714 [2024-11-06 14:06:03.954712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.714 [2024-11-06 14:06:03.954788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:17.714 [2024-11-06 14:06:03.954798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:17.714 [2024-11-06 14:06:03.954806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.714 [2024-11-06 14:06:03.954810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.714 [2024-11-06 14:06:03.954816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.714 [2024-11-06 14:06:03.954827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.714 [2024-11-06 14:06:03.954992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.714 [2024-11-06 14:06:03.954999] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.714 [2024-11-06 14:06:03.955002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.714 [2024-11-06 14:06:03.955006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=4096, cccid=4 00:24:17.714 [2024-11-06 14:06:03.955010] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5700) on tqpair(0x1963550): expected_datao=0, payload_size=4096 00:24:17.714 [2024-11-06 14:06:03.955015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.714 [2024-11-06 14:06:03.955027] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.714 [2024-11-06 14:06:03.955030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:03.999757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:03.999770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:03.999774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:03.999778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:03.999791] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:17.978 [2024-11-06 14:06:03.999803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:03.999813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:03.999821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:03.999824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:03.999832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.978 [2024-11-06 14:06:03.999844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.978 [2024-11-06 14:06:04.000007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.978 [2024-11-06 14:06:04.000016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.978 [2024-11-06 14:06:04.000020] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000024] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=4096, cccid=4 00:24:17.978 [2024-11-06 14:06:04.000028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5700) on tqpair(0x1963550): expected_datao=0, payload_size=4096 00:24:17.978 [2024-11-06 14:06:04.000033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000043] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.000218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.000221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:04.000240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:17.978 
[2024-11-06 14:06:04.000251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.000258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.000268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.978 [2024-11-06 14:06:04.000279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.978 [2024-11-06 14:06:04.000557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.978 [2024-11-06 14:06:04.000563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.978 [2024-11-06 14:06:04.000567] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000570] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=4096, cccid=4 00:24:17.978 [2024-11-06 14:06:04.000575] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5700) on tqpair(0x1963550): expected_datao=0, payload_size=4096 00:24:17.978 [2024-11-06 14:06:04.000579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000592] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.000596] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.046762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.046782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.046785] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.046790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:04.046803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046853] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:17.978 [2024-11-06 14:06:04.046857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:17.978 [2024-11-06 14:06:04.046863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:17.978 [2024-11-06 14:06:04.046881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.046885] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.046894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.978 [2024-11-06 14:06:04.046902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.046906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.046909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.046916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.978 [2024-11-06 14:06:04.046933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.978 [2024-11-06 14:06:04.046939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5880, cid 5, qid 0 00:24:17.978 [2024-11-06 14:06:04.047065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.047072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.047075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:04.047087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.047093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.047096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5880) on tqpair=0x1963550 00:24:17.978 [2024-11-06 
14:06:04.047109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.047119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.978 [2024-11-06 14:06:04.047130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5880, cid 5, qid 0 00:24:17.978 [2024-11-06 14:06:04.047288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.047294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.047298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5880) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:04.047311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.047322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.978 [2024-11-06 14:06:04.047331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5880, cid 5, qid 0 00:24:17.978 [2024-11-06 14:06:04.047608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.978 [2024-11-06 14:06:04.047614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.978 [2024-11-06 14:06:04.047618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19c5880) on tqpair=0x1963550 00:24:17.978 [2024-11-06 14:06:04.047631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.978 [2024-11-06 14:06:04.047635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1963550) 00:24:17.978 [2024-11-06 14:06:04.047641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.979 [2024-11-06 14:06:04.047651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5880, cid 5, qid 0 00:24:17.979 [2024-11-06 14:06:04.047854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.979 [2024-11-06 14:06:04.047861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.979 [2024-11-06 14:06:04.047865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.047868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5880) on tqpair=0x1963550 00:24:17.979 [2024-11-06 14:06:04.047886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.047890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1963550) 00:24:17.979 [2024-11-06 14:06:04.047897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.979 [2024-11-06 14:06:04.047904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.047908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1963550) 00:24:17.979 [2024-11-06 14:06:04.047915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.979 [2024-11-06 14:06:04.047922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.047926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1963550) 00:24:17.979 [2024-11-06 14:06:04.047932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.979 [2024-11-06 14:06:04.047940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.047944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1963550) 00:24:17.979 [2024-11-06 14:06:04.047950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.979 [2024-11-06 14:06:04.047962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5880, cid 5, qid 0 00:24:17.979 [2024-11-06 14:06:04.047967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5700, cid 4, qid 0 00:24:17.979 [2024-11-06 14:06:04.047972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5a00, cid 6, qid 0 00:24:17.979 [2024-11-06 14:06:04.047976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5b80, cid 7, qid 0 00:24:17.979 [2024-11-06 14:06:04.048194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.979 [2024-11-06 14:06:04.048202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.979 [2024-11-06 14:06:04.048206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048210] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=8192, cccid=5 00:24:17.979 [2024-11-06 14:06:04.048214] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5880) on tqpair(0x1963550): expected_datao=0, payload_size=8192 00:24:17.979 [2024-11-06 14:06:04.048225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048324] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.979 [2024-11-06 14:06:04.048336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.979 [2024-11-06 14:06:04.048339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048343] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=512, cccid=4 00:24:17.979 [2024-11-06 14:06:04.048347] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5700) on tqpair(0x1963550): expected_datao=0, payload_size=512 00:24:17.979 [2024-11-06 14:06:04.048352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048358] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.979 [2024-11-06 14:06:04.048373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.979 [2024-11-06 14:06:04.048376] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048380] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=512, cccid=6 00:24:17.979 [2024-11-06 14:06:04.048384] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x19c5a00) on tqpair(0x1963550): expected_datao=0, payload_size=512 00:24:17.979 [2024-11-06 14:06:04.048389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048395] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048398] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.979 [2024-11-06 14:06:04.048410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.979 [2024-11-06 14:06:04.048413] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048417] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1963550): datao=0, datal=4096, cccid=7 00:24:17.979 [2024-11-06 14:06:04.048421] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19c5b80) on tqpair(0x1963550): expected_datao=0, payload_size=4096 00:24:17.979 [2024-11-06 14:06:04.048425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048432] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.048436] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.089981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.979 [2024-11-06 14:06:04.089992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.979 [2024-11-06 14:06:04.089996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.090000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5880) on tqpair=0x1963550 00:24:17.979 [2024-11-06 14:06:04.090015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.979 [2024-11-06 14:06:04.090021] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.979 [2024-11-06 14:06:04.090025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.090029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5700) on tqpair=0x1963550 00:24:17.979 [2024-11-06 14:06:04.090040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.979 [2024-11-06 14:06:04.090045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.979 [2024-11-06 14:06:04.090049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.090053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5a00) on tqpair=0x1963550 00:24:17.979 [2024-11-06 14:06:04.090062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.979 [2024-11-06 14:06:04.090068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.979 [2024-11-06 14:06:04.090072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.979 [2024-11-06 14:06:04.090075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5b80) on tqpair=0x1963550 00:24:17.979 ===================================================== 00:24:17.979 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.979 ===================================================== 00:24:17.979 Controller Capabilities/Features 00:24:17.979 ================================ 00:24:17.979 Vendor ID: 8086 00:24:17.979 Subsystem Vendor ID: 8086 00:24:17.979 Serial Number: SPDK00000000000001 00:24:17.979 Model Number: SPDK bdev Controller 00:24:17.979 Firmware Version: 25.01 00:24:17.979 Recommended Arb Burst: 6 00:24:17.979 IEEE OUI Identifier: e4 d2 5c 00:24:17.979 Multi-path I/O 00:24:17.979 May have multiple subsystem ports: Yes 00:24:17.979 May have multiple controllers: Yes 00:24:17.979 Associated with SR-IOV VF: No 
00:24:17.979 Max Data Transfer Size: 131072 00:24:17.979 Max Number of Namespaces: 32 00:24:17.979 Max Number of I/O Queues: 127 00:24:17.979 NVMe Specification Version (VS): 1.3 00:24:17.979 NVMe Specification Version (Identify): 1.3 00:24:17.979 Maximum Queue Entries: 128 00:24:17.979 Contiguous Queues Required: Yes 00:24:17.979 Arbitration Mechanisms Supported 00:24:17.979 Weighted Round Robin: Not Supported 00:24:17.979 Vendor Specific: Not Supported 00:24:17.979 Reset Timeout: 15000 ms 00:24:17.979 Doorbell Stride: 4 bytes 00:24:17.979 NVM Subsystem Reset: Not Supported 00:24:17.979 Command Sets Supported 00:24:17.979 NVM Command Set: Supported 00:24:17.979 Boot Partition: Not Supported 00:24:17.979 Memory Page Size Minimum: 4096 bytes 00:24:17.979 Memory Page Size Maximum: 4096 bytes 00:24:17.979 Persistent Memory Region: Not Supported 00:24:17.979 Optional Asynchronous Events Supported 00:24:17.979 Namespace Attribute Notices: Supported 00:24:17.979 Firmware Activation Notices: Not Supported 00:24:17.979 ANA Change Notices: Not Supported 00:24:17.979 PLE Aggregate Log Change Notices: Not Supported 00:24:17.979 LBA Status Info Alert Notices: Not Supported 00:24:17.979 EGE Aggregate Log Change Notices: Not Supported 00:24:17.979 Normal NVM Subsystem Shutdown event: Not Supported 00:24:17.979 Zone Descriptor Change Notices: Not Supported 00:24:17.979 Discovery Log Change Notices: Not Supported 00:24:17.979 Controller Attributes 00:24:17.979 128-bit Host Identifier: Supported 00:24:17.979 Non-Operational Permissive Mode: Not Supported 00:24:17.979 NVM Sets: Not Supported 00:24:17.979 Read Recovery Levels: Not Supported 00:24:17.980 Endurance Groups: Not Supported 00:24:17.980 Predictable Latency Mode: Not Supported 00:24:17.980 Traffic Based Keep ALive: Not Supported 00:24:17.980 Namespace Granularity: Not Supported 00:24:17.980 SQ Associations: Not Supported 00:24:17.980 UUID List: Not Supported 00:24:17.980 Multi-Domain Subsystem: Not Supported 00:24:17.980 
Fixed Capacity Management: Not Supported 00:24:17.980 Variable Capacity Management: Not Supported 00:24:17.980 Delete Endurance Group: Not Supported 00:24:17.980 Delete NVM Set: Not Supported 00:24:17.980 Extended LBA Formats Supported: Not Supported 00:24:17.980 Flexible Data Placement Supported: Not Supported 00:24:17.980 00:24:17.980 Controller Memory Buffer Support 00:24:17.980 ================================ 00:24:17.980 Supported: No 00:24:17.980 00:24:17.980 Persistent Memory Region Support 00:24:17.980 ================================ 00:24:17.980 Supported: No 00:24:17.980 00:24:17.980 Admin Command Set Attributes 00:24:17.980 ============================ 00:24:17.980 Security Send/Receive: Not Supported 00:24:17.980 Format NVM: Not Supported 00:24:17.980 Firmware Activate/Download: Not Supported 00:24:17.980 Namespace Management: Not Supported 00:24:17.980 Device Self-Test: Not Supported 00:24:17.980 Directives: Not Supported 00:24:17.980 NVMe-MI: Not Supported 00:24:17.980 Virtualization Management: Not Supported 00:24:17.980 Doorbell Buffer Config: Not Supported 00:24:17.980 Get LBA Status Capability: Not Supported 00:24:17.980 Command & Feature Lockdown Capability: Not Supported 00:24:17.980 Abort Command Limit: 4 00:24:17.980 Async Event Request Limit: 4 00:24:17.980 Number of Firmware Slots: N/A 00:24:17.980 Firmware Slot 1 Read-Only: N/A 00:24:17.980 Firmware Activation Without Reset: N/A 00:24:17.980 Multiple Update Detection Support: N/A 00:24:17.980 Firmware Update Granularity: No Information Provided 00:24:17.980 Per-Namespace SMART Log: No 00:24:17.980 Asymmetric Namespace Access Log Page: Not Supported 00:24:17.980 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:17.980 Command Effects Log Page: Supported 00:24:17.980 Get Log Page Extended Data: Supported 00:24:17.980 Telemetry Log Pages: Not Supported 00:24:17.980 Persistent Event Log Pages: Not Supported 00:24:17.980 Supported Log Pages Log Page: May Support 00:24:17.980 Commands Supported & 
Effects Log Page: Not Supported 00:24:17.980 Feature Identifiers & Effects Log Page:May Support 00:24:17.980 NVMe-MI Commands & Effects Log Page: May Support 00:24:17.980 Data Area 4 for Telemetry Log: Not Supported 00:24:17.980 Error Log Page Entries Supported: 128 00:24:17.980 Keep Alive: Supported 00:24:17.980 Keep Alive Granularity: 10000 ms 00:24:17.980 00:24:17.980 NVM Command Set Attributes 00:24:17.980 ========================== 00:24:17.980 Submission Queue Entry Size 00:24:17.980 Max: 64 00:24:17.980 Min: 64 00:24:17.980 Completion Queue Entry Size 00:24:17.980 Max: 16 00:24:17.980 Min: 16 00:24:17.980 Number of Namespaces: 32 00:24:17.980 Compare Command: Supported 00:24:17.980 Write Uncorrectable Command: Not Supported 00:24:17.980 Dataset Management Command: Supported 00:24:17.980 Write Zeroes Command: Supported 00:24:17.980 Set Features Save Field: Not Supported 00:24:17.980 Reservations: Supported 00:24:17.980 Timestamp: Not Supported 00:24:17.980 Copy: Supported 00:24:17.980 Volatile Write Cache: Present 00:24:17.980 Atomic Write Unit (Normal): 1 00:24:17.980 Atomic Write Unit (PFail): 1 00:24:17.980 Atomic Compare & Write Unit: 1 00:24:17.980 Fused Compare & Write: Supported 00:24:17.980 Scatter-Gather List 00:24:17.980 SGL Command Set: Supported 00:24:17.980 SGL Keyed: Supported 00:24:17.980 SGL Bit Bucket Descriptor: Not Supported 00:24:17.980 SGL Metadata Pointer: Not Supported 00:24:17.980 Oversized SGL: Not Supported 00:24:17.980 SGL Metadata Address: Not Supported 00:24:17.980 SGL Offset: Supported 00:24:17.980 Transport SGL Data Block: Not Supported 00:24:17.980 Replay Protected Memory Block: Not Supported 00:24:17.980 00:24:17.980 Firmware Slot Information 00:24:17.980 ========================= 00:24:17.980 Active slot: 1 00:24:17.980 Slot 1 Firmware Revision: 25.01 00:24:17.980 00:24:17.980 00:24:17.980 Commands Supported and Effects 00:24:17.980 ============================== 00:24:17.980 Admin Commands 00:24:17.980 -------------- 
00:24:17.980 Get Log Page (02h): Supported 00:24:17.980 Identify (06h): Supported 00:24:17.980 Abort (08h): Supported 00:24:17.980 Set Features (09h): Supported 00:24:17.980 Get Features (0Ah): Supported 00:24:17.980 Asynchronous Event Request (0Ch): Supported 00:24:17.980 Keep Alive (18h): Supported 00:24:17.980 I/O Commands 00:24:17.980 ------------ 00:24:17.980 Flush (00h): Supported LBA-Change 00:24:17.980 Write (01h): Supported LBA-Change 00:24:17.980 Read (02h): Supported 00:24:17.980 Compare (05h): Supported 00:24:17.980 Write Zeroes (08h): Supported LBA-Change 00:24:17.980 Dataset Management (09h): Supported LBA-Change 00:24:17.980 Copy (19h): Supported LBA-Change 00:24:17.980 00:24:17.980 Error Log 00:24:17.980 ========= 00:24:17.980 00:24:17.980 Arbitration 00:24:17.980 =========== 00:24:17.980 Arbitration Burst: 1 00:24:17.980 00:24:17.980 Power Management 00:24:17.980 ================ 00:24:17.980 Number of Power States: 1 00:24:17.980 Current Power State: Power State #0 00:24:17.980 Power State #0: 00:24:17.980 Max Power: 0.00 W 00:24:17.980 Non-Operational State: Operational 00:24:17.980 Entry Latency: Not Reported 00:24:17.980 Exit Latency: Not Reported 00:24:17.980 Relative Read Throughput: 0 00:24:17.980 Relative Read Latency: 0 00:24:17.980 Relative Write Throughput: 0 00:24:17.980 Relative Write Latency: 0 00:24:17.980 Idle Power: Not Reported 00:24:17.980 Active Power: Not Reported 00:24:17.980 Non-Operational Permissive Mode: Not Supported 00:24:17.980 00:24:17.980 Health Information 00:24:17.980 ================== 00:24:17.980 Critical Warnings: 00:24:17.980 Available Spare Space: OK 00:24:17.980 Temperature: OK 00:24:17.980 Device Reliability: OK 00:24:17.980 Read Only: No 00:24:17.980 Volatile Memory Backup: OK 00:24:17.980 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:17.980 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:17.980 Available Spare: 0% 00:24:17.980 Available Spare Threshold: 0% 00:24:17.980 Life Percentage 
Used:[2024-11-06 14:06:04.090179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.980 [2024-11-06 14:06:04.090185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1963550) 00:24:17.980 [2024-11-06 14:06:04.090193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.980 [2024-11-06 14:06:04.090207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5b80, cid 7, qid 0 00:24:17.980 [2024-11-06 14:06:04.090452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.980 [2024-11-06 14:06:04.090459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.980 [2024-11-06 14:06:04.090462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.980 [2024-11-06 14:06:04.090466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5b80) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090502] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:17.981 [2024-11-06 14:06:04.090512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5100) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.981 [2024-11-06 14:06:04.090523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5280) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.981 [2024-11-06 14:06:04.090533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5400) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.981 [2024-11-06 14:06:04.090543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.981 [2024-11-06 14:06:04.090556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.090560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.090564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.090571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.090583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.090705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.090712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.090715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.090719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.090726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.090730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.090733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.090740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.094769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.094983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.094991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.094994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.094998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.095003] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:17.981 [2024-11-06 14:06:04.095008] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:17.981 [2024-11-06 14:06:04.095017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.095032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.095042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.095202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.095208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.095212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095215] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.095226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.095240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.095250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.095442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.095448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.095452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.095465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.095480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.095490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.095732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 
14:06:04.095739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.095742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.095762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.095779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.095790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.095968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.095974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.095977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.095991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.095998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.096005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 
14:06:04.096015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.096234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.096240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.096244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.096257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.096271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.096281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.096473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.096479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.096482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.096496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.096510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.096520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.096697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.096703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.096707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.096721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.096735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.981 [2024-11-06 14:06:04.096752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.981 [2024-11-06 14:06:04.096930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.981 [2024-11-06 14:06:04.096938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.981 [2024-11-06 14:06:04.096942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.981 [2024-11-06 14:06:04.096955] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.981 [2024-11-06 14:06:04.096963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.981 [2024-11-06 14:06:04.096969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.096979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.097230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.097236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.097240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.097253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.097267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.097277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.097471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.097478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.097481] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.097495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.097509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.097519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.097719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.097725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.097729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.097742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.097764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.097776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 
14:06:04.097964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.097970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.097974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.097987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.097995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.098001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.098011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.098182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.098188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.098191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.098205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.098219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.098228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.098418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.098425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.098428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.098442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.098456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.098466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.098724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.098730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.098734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.098737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.102755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.102762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:17.982 [2024-11-06 14:06:04.102766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1963550) 00:24:17.982 [2024-11-06 14:06:04.102773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.982 [2024-11-06 14:06:04.102784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19c5580, cid 3, qid 0 00:24:17.982 [2024-11-06 14:06:04.103020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.982 [2024-11-06 14:06:04.103027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.982 [2024-11-06 14:06:04.103030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.982 [2024-11-06 14:06:04.103034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19c5580) on tqpair=0x1963550 00:24:17.982 [2024-11-06 14:06:04.103042] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:24:17.982 0% 00:24:17.982 Data Units Read: 0 00:24:17.982 Data Units Written: 0 00:24:17.982 Host Read Commands: 0 00:24:17.982 Host Write Commands: 0 00:24:17.982 Controller Busy Time: 0 minutes 00:24:17.982 Power Cycles: 0 00:24:17.982 Power On Hours: 0 hours 00:24:17.982 Unsafe Shutdowns: 0 00:24:17.982 Unrecoverable Media Errors: 0 00:24:17.982 Lifetime Error Log Entries: 0 00:24:17.982 Warning Temperature Time: 0 minutes 00:24:17.982 Critical Temperature Time: 0 minutes 00:24:17.982 00:24:17.982 Number of Queues 00:24:17.982 ================ 00:24:17.982 Number of I/O Submission Queues: 127 00:24:17.982 Number of I/O Completion Queues: 127 00:24:17.982 00:24:17.982 Active Namespaces 00:24:17.982 ================= 00:24:17.982 Namespace ID:1 00:24:17.982 Error Recovery Timeout: Unlimited 00:24:17.982 Command Set Identifier: NVM (00h) 00:24:17.982 Deallocate: Supported 00:24:17.982 Deallocated/Unwritten 
Error: Not Supported 00:24:17.982 Deallocated Read Value: Unknown 00:24:17.982 Deallocate in Write Zeroes: Not Supported 00:24:17.982 Deallocated Guard Field: 0xFFFF 00:24:17.982 Flush: Supported 00:24:17.982 Reservation: Supported 00:24:17.982 Namespace Sharing Capabilities: Multiple Controllers 00:24:17.982 Size (in LBAs): 131072 (0GiB) 00:24:17.983 Capacity (in LBAs): 131072 (0GiB) 00:24:17.983 Utilization (in LBAs): 131072 (0GiB) 00:24:17.983 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:17.983 EUI64: ABCDEF0123456789 00:24:17.983 UUID: 972752f9-ab7a-4056-a5be-12c8f58e0cc5 00:24:17.983 Thin Provisioning: Not Supported 00:24:17.983 Per-NS Atomic Units: Yes 00:24:17.983 Atomic Boundary Size (Normal): 0 00:24:17.983 Atomic Boundary Size (PFail): 0 00:24:17.983 Atomic Boundary Offset: 0 00:24:17.983 Maximum Single Source Range Length: 65535 00:24:17.983 Maximum Copy Length: 65535 00:24:17.983 Maximum Source Range Count: 1 00:24:17.983 NGUID/EUI64 Never Reused: No 00:24:17.983 Namespace Write Protected: No 00:24:17.983 Number of LBA Formats: 1 00:24:17.983 Current LBA Format: LBA Format #00 00:24:17.983 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:17.983 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.983 rmmod nvme_tcp 00:24:17.983 rmmod nvme_fabrics 00:24:17.983 rmmod nvme_keyring 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2504684 ']' 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2504684 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2504684 ']' 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2504684 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.983 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2504684 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 2504684' 00:24:18.243 killing process with pid 2504684 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2504684 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2504684 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.243 14:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.785 14:06:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.785 00:24:20.785 real 0m11.927s 00:24:20.785 user 0m9.069s 00:24:20.785 sys 0m6.282s 00:24:20.785 14:06:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.786 
************************************ 00:24:20.786 END TEST nvmf_identify 00:24:20.786 ************************************ 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.786 ************************************ 00:24:20.786 START TEST nvmf_perf 00:24:20.786 ************************************ 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.786 * Looking for test storage... 00:24:20.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.786 --rc genhtml_branch_coverage=1 00:24:20.786 --rc genhtml_function_coverage=1 00:24:20.786 --rc genhtml_legend=1 00:24:20.786 --rc geninfo_all_blocks=1 00:24:20.786 --rc geninfo_unexecuted_blocks=1 00:24:20.786 00:24:20.786 ' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.786 --rc genhtml_branch_coverage=1 00:24:20.786 --rc genhtml_function_coverage=1 00:24:20.786 --rc genhtml_legend=1 00:24:20.786 --rc geninfo_all_blocks=1 00:24:20.786 --rc geninfo_unexecuted_blocks=1 00:24:20.786 00:24:20.786 ' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.786 --rc genhtml_branch_coverage=1 00:24:20.786 --rc genhtml_function_coverage=1 00:24:20.786 --rc genhtml_legend=1 00:24:20.786 --rc geninfo_all_blocks=1 00:24:20.786 --rc geninfo_unexecuted_blocks=1 00:24:20.786 00:24:20.786 ' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.786 --rc genhtml_branch_coverage=1 00:24:20.786 --rc genhtml_function_coverage=1 00:24:20.786 --rc genhtml_legend=1 00:24:20.786 --rc geninfo_all_blocks=1 00:24:20.786 --rc geninfo_unexecuted_blocks=1 00:24:20.786 00:24:20.786 ' 00:24:20.786 14:06:06 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.786 14:06:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.786 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:20.787 14:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:28.928 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.928 
14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:28.928 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:28.928 Found net devices under 0000:31:00.0: cvl_0_0 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:28.928 Found net devices under 0000:31:00.1: cvl_0_1 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.928 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:24:28.929 00:24:28.929 --- 10.0.0.2 ping statistics --- 00:24:28.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.929 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:28.929 00:24:28.929 --- 10.0.0.1 ping statistics --- 00:24:28.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.929 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2509662 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2509662 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.929 
14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2509662 ']' 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:28.929 14:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 [2024-11-06 14:06:14.613542] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:24:28.929 [2024-11-06 14:06:14.613611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.929 [2024-11-06 14:06:14.715536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.929 [2024-11-06 14:06:14.768988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.929 [2024-11-06 14:06:14.769036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.929 [2024-11-06 14:06:14.769045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.929 [2024-11-06 14:06:14.769053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.929 [2024-11-06 14:06:14.769060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.929 [2024-11-06 14:06:14.771110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.929 [2024-11-06 14:06:14.771271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.929 [2024-11-06 14:06:14.771431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.929 [2024-11-06 14:06:14.771432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:29.191 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:29.762 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:29.762 14:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:30.023 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:30.023 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:30.284 14:06:16 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:30.284 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:30.284 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:30.284 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:30.284 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:30.546 [2024-11-06 14:06:16.584567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.546 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.546 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:30.806 14:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.807 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:30.807 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:31.067 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.328 [2024-11-06 14:06:17.388168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.328 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:31.589 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:31.589 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:31.589 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:31.589 14:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:33.058 Initializing NVMe Controllers 00:24:33.058 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:33.058 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:33.058 Initialization complete. Launching workers. 00:24:33.058 ======================================================== 00:24:33.058 Latency(us) 00:24:33.058 Device Information : IOPS MiB/s Average min max 00:24:33.058 PCIE (0000:65:00.0) NSID 1 from core 0: 78579.16 306.95 406.58 13.38 5247.68 00:24:33.058 ======================================================== 00:24:33.058 Total : 78579.16 306.95 406.58 13.38 5247.68 00:24:33.058 00:24:33.058 14:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:33.996 Initializing NVMe Controllers 00:24:33.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.996 Initialization complete. Launching workers. 
00:24:33.996 ======================================================== 00:24:33.996 Latency(us) 00:24:33.996 Device Information : IOPS MiB/s Average min max 00:24:33.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 120.56 0.47 8585.38 109.89 44957.51 00:24:33.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.79 0.22 17747.08 5978.87 48882.77 00:24:33.996 ======================================================== 00:24:33.996 Total : 177.36 0.69 11519.18 109.89 48882.77 00:24:33.996 00:24:34.256 14:06:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:35.637 Initializing NVMe Controllers 00:24:35.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:35.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:35.637 Initialization complete. Launching workers. 
00:24:35.637 ======================================================== 00:24:35.637 Latency(us) 00:24:35.637 Device Information : IOPS MiB/s Average min max 00:24:35.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11920.86 46.57 2687.55 413.85 6156.98 00:24:35.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3851.95 15.05 8355.44 5547.47 17287.03 00:24:35.637 ======================================================== 00:24:35.637 Total : 15772.81 61.61 4071.73 413.85 17287.03 00:24:35.637 00:24:35.637 14:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:35.637 14:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:35.637 14:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.178 Initializing NVMe Controllers 00:24:38.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.178 Controller IO queue size 128, less than required. 00:24:38.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.178 Controller IO queue size 128, less than required. 00:24:38.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:38.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:38.178 Initialization complete. Launching workers. 
00:24:38.178 ======================================================== 00:24:38.178 Latency(us) 00:24:38.178 Device Information : IOPS MiB/s Average min max 00:24:38.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1701.41 425.35 76754.59 40366.77 126382.82 00:24:38.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.80 145.70 227173.72 67296.09 351154.14 00:24:38.178 ======================================================== 00:24:38.178 Total : 2284.21 571.05 115132.86 40366.77 351154.14 00:24:38.178 00:24:38.178 14:06:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:38.438 No valid NVMe controllers or AIO or URING devices found 00:24:38.438 Initializing NVMe Controllers 00:24:38.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.438 Controller IO queue size 128, less than required. 00:24:38.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.438 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:38.438 Controller IO queue size 128, less than required. 00:24:38.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.438 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:38.438 WARNING: Some requested NVMe devices were skipped 00:24:38.438 14:06:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:40.988 Initializing NVMe Controllers 00:24:40.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:40.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:40.988 Initialization complete. Launching workers. 
00:24:40.988 00:24:40.988 ==================== 00:24:40.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:40.988 TCP transport: 00:24:40.988 polls: 39220 00:24:40.988 idle_polls: 23745 00:24:40.988 sock_completions: 15475 00:24:40.988 nvme_completions: 7081 00:24:40.988 submitted_requests: 10592 00:24:40.988 queued_requests: 1 00:24:40.988 00:24:40.988 ==================== 00:24:40.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:40.988 TCP transport: 00:24:40.988 polls: 54627 00:24:40.988 idle_polls: 38818 00:24:40.988 sock_completions: 15809 00:24:40.988 nvme_completions: 7461 00:24:40.988 submitted_requests: 11176 00:24:40.988 queued_requests: 1 00:24:40.988 ======================================================== 00:24:40.988 Latency(us) 00:24:40.988 Device Information : IOPS MiB/s Average min max 00:24:40.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1767.07 441.77 74061.13 38312.39 137169.39 00:24:40.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1861.91 465.48 69221.24 29780.45 124963.48 00:24:40.988 ======================================================== 00:24:40.988 Total : 3628.98 907.24 71577.94 29780.45 137169.39 00:24:40.988 00:24:40.988 14:06:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:40.988 14:06:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.988 14:06:27 
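The `--transport-stat` block above reports, per lcore/namespace, how many times the TCP poller ran (`polls`) versus how many of those runs found nothing to do (`idle_polls`). The idle fraction is a quick load indicator; computed from the NSID 1 numbers in the log:

```shell
# Figures copied from the "NSID 1 statistics" block above.
polls=39220
idle_polls=23745
# Integer percentage of poller iterations that found no completions.
echo "idle poll percentage: $(( idle_polls * 100 / polls ))%"
```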
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.988 rmmod nvme_tcp 00:24:40.988 rmmod nvme_fabrics 00:24:40.988 rmmod nvme_keyring 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2509662 ']' 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2509662 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2509662 ']' 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2509662 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.988 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2509662 00:24:41.249 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:41.249 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:41.249 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2509662' 00:24:41.249 killing process with pid 2509662 00:24:41.249 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@971 -- # kill 2509662 00:24:41.249 14:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2509662 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.160 14:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.070 14:06:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.070 00:24:45.070 real 0m24.689s 00:24:45.070 user 0m59.430s 00:24:45.070 sys 0m8.796s 00:24:45.070 14:06:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:45.070 14:06:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:45.070 ************************************ 00:24:45.070 END TEST nvmf_perf 00:24:45.070 ************************************ 00:24:45.330 14:06:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
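The `iptr` teardown traced above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) works because the matching `ipts` helper (visible later in this log during nvmf_fio_host setup) inserts every rule with an `-m comment --comment 'SPDK_NVMF:…'` marker, so cleanup can restore everything except the tagged rules. A sketch of the filtering step against synthetic rules (assumption: mock data, not the host's real firewall state):

```shell
# Mock iptables-save output: one SPDK-tagged rule between two unrelated ones.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -j DROP'

# Teardown keeps only the untagged rules, as iptr does before iptables-restore.
printf '%s\n' "$saved" | grep -v SPDK_NVMF
```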
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:45.330 14:06:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:45.330 14:06:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:45.330 14:06:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.330 ************************************ 00:24:45.330 START TEST nvmf_fio_host 00:24:45.330 ************************************ 00:24:45.330 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:45.330 * Looking for test storage... 00:24:45.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.331 14:06:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.331 14:06:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.331 --rc genhtml_branch_coverage=1 00:24:45.331 --rc genhtml_function_coverage=1 00:24:45.331 --rc genhtml_legend=1 00:24:45.331 --rc geninfo_all_blocks=1 00:24:45.331 --rc geninfo_unexecuted_blocks=1 00:24:45.331 00:24:45.331 ' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.331 --rc genhtml_branch_coverage=1 00:24:45.331 --rc genhtml_function_coverage=1 00:24:45.331 --rc genhtml_legend=1 00:24:45.331 --rc geninfo_all_blocks=1 00:24:45.331 --rc geninfo_unexecuted_blocks=1 00:24:45.331 00:24:45.331 ' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.331 --rc genhtml_branch_coverage=1 00:24:45.331 --rc genhtml_function_coverage=1 00:24:45.331 --rc genhtml_legend=1 00:24:45.331 --rc geninfo_all_blocks=1 00:24:45.331 --rc geninfo_unexecuted_blocks=1 00:24:45.331 00:24:45.331 ' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.331 --rc genhtml_branch_coverage=1 00:24:45.331 --rc genhtml_function_coverage=1 00:24:45.331 --rc genhtml_legend=1 00:24:45.331 --rc geninfo_all_blocks=1 00:24:45.331 --rc geninfo_unexecuted_blocks=1 00:24:45.331 00:24:45.331 ' 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
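The `lt 1.15 2` / `cmp_versions` trace above compares dotted version strings component by component, numerically, padding the shorter version with zeros. A minimal sketch of the same idea (assumption: a simplified re-implementation, not the actual scripts/common.sh helper, which also splits on `-`/`:` and supports the other comparison operators):

```shell
# version_lt A B: succeed iff version A sorts strictly before version B.
version_lt() {
  local IFS=.
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # the comparison traced above
```

Comparing components as integers (not strings) is what makes `1.2 < 1.10` come out correctly.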
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.331 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.592 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.593 14:06:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.593 14:06:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:53.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:53.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.733 14:06:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:53.733 Found net devices under 0000:31:00.0: cvl_0_0 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.733 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:53.734 Found net devices under 0000:31:00.1: cvl_0_1 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
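The discovery loop above maps each detected PCI function to its kernel net device by globbing `/sys/bus/pci/devices/<pci>/net/*`, producing the "Found net devices under …" lines. The same walk can be reproduced against a throwaway mock of that sysfs layout (`enumerate_net_devs` is a hypothetical name; the paths are synthetic):

```shell
# Walk a sysfs-like tree: <root>/<pci-address>/net/<ifname>
enumerate_net_devs() {
  local root=$1 pci dev
  for pci in "$root"/*; do
    for dev in "$pci"/net/*; do
      [ -e "$dev" ] || continue   # skip PCI functions with no net device
      echo "Found net devices under ${pci##*/}: ${dev##*/}"
    done
  done
}

# Mock the two e810 ports seen in the log above.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:31:00.0/net/cvl_0_0" "$tmp/0000:31:00.1/net/cvl_0_1"
enumerate_net_devs "$tmp"
rm -rf "$tmp"
```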
00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.734 14:06:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.734 14:06:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:24:53.734 00:24:53.734 --- 10.0.0.2 ping statistics --- 00:24:53.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.734 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:24:53.734 00:24:53.734 --- 10.0.0.1 ping statistics --- 00:24:53.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.734 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2516766 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2516766 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2516766 ']' 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:53.734 14:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.734 [2024-11-06 14:06:39.306628] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:24:53.734 [2024-11-06 14:06:39.306697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.734 [2024-11-06 14:06:39.408244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.734 [2024-11-06 14:06:39.460604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.734 [2024-11-06 14:06:39.460660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:53.734 [2024-11-06 14:06:39.460669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.734 [2024-11-06 14:06:39.460676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.734 [2024-11-06 14:06:39.460683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.734 [2024-11-06 14:06:39.463130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.734 [2024-11-06 14:06:39.463290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.734 [2024-11-06 14:06:39.463449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.734 [2024-11-06 14:06:39.463449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.995 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:53.995 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:53.995 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:54.256 [2024-11-06 14:06:40.281479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.256 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:54.256 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:54.256 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.256 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:54.516 Malloc1 00:24:54.516 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.777 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:54.777 14:06:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.037 [2024-11-06 14:06:41.156589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.038 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:55.299 14:06:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:55.299 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:55.300 14:06:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:55.561 fio-3.35 00:24:55.561 Starting 1 thread 00:24:58.108 00:24:58.108 test: (groupid=0, jobs=1): err= 0: pid=2517509: Wed Nov 6 14:06:44 2024 00:24:58.108 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec) 00:24:58.108 slat (usec): min=2, max=282, avg= 2.14, stdev= 2.46 00:24:58.108 clat (usec): min=3680, max=9924, avg=5159.83, stdev=409.16 00:24:58.108 lat (usec): min=3719, max=9930, avg=5161.97, stdev=409.44 00:24:58.108 clat percentiles (usec): 00:24:58.108 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:24:58.108 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5211], 00:24:58.108 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5735], 00:24:58.108 | 99.00th=[ 6259], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9503], 00:24:58.108 | 99.99th=[ 9634] 00:24:58.108 bw ( KiB/s): min=51376, max=56064, per=99.92%, avg=54626.00, stdev=2186.08, samples=4 00:24:58.108 iops : min=12844, max=14016, avg=13656.50, stdev=546.52, samples=4 00:24:58.108 write: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2004msec); 0 zone resets 00:24:58.108 slat (usec): min=2, max=270, avg= 2.21, stdev= 1.81 00:24:58.108 clat (usec): min=2900, max=8237, avg=4159.63, stdev=347.56 00:24:58.108 lat (usec): min=2918, max=8316, avg=4161.84, stdev=347.92 00:24:58.108 clat percentiles (usec): 00:24:58.108 | 1.00th=[ 3490], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:24:58.108 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:24:58.108 | 70.00th=[ 
4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:58.108 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 7767], 00:24:58.108 | 99.99th=[ 7963] 00:24:58.108 bw ( KiB/s): min=51720, max=55928, per=100.00%, avg=54594.00, stdev=1940.44, samples=4 00:24:58.108 iops : min=12930, max=13982, avg=13648.50, stdev=485.11, samples=4 00:24:58.108 lat (msec) : 4=14.98%, 10=85.02% 00:24:58.108 cpu : usr=77.68%, sys=21.72%, ctx=28, majf=0, minf=17 00:24:58.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:58.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:58.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:58.109 issued rwts: total=27389,27348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:58.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:58.109 00:24:58.109 Run status group 0 (all jobs): 00:24:58.109 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:58.109 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:58.109 
14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.109 14:06:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.370 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:58.370 fio-3.35 00:24:58.370 Starting 1 thread 00:25:00.922 00:25:00.922 test: (groupid=0, jobs=1): err= 0: pid=2518124: Wed Nov 6 14:06:46 2024 00:25:00.922 read: IOPS=9363, BW=146MiB/s (153MB/s)(299MiB/2043msec) 00:25:00.922 slat (usec): min=3, max=114, avg= 3.60, stdev= 1.63 00:25:00.922 clat (usec): min=1490, max=50712, avg=8339.65, stdev=4094.74 00:25:00.922 lat (usec): min=1493, max=50715, avg=8343.25, stdev=4094.81 00:25:00.922 clat percentiles (usec): 00:25:00.922 | 1.00th=[ 4080], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:25:00.922 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8455], 00:25:00.922 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11207], 00:25:00.922 | 99.00th=[14484], 99.50th=[46400], 99.90th=[50070], 99.95th=[50594], 00:25:00.922 | 99.99th=[50594] 00:25:00.922 bw ( KiB/s): min=66272, max=86592, per=50.20%, avg=75216.00, stdev=9039.51, samples=4 00:25:00.922 iops : min= 4142, max= 5412, avg=4701.00, stdev=564.97, samples=4 00:25:00.922 write: IOPS=5566, BW=87.0MiB/s (91.2MB/s)(154MiB/1771msec); 0 zone resets 00:25:00.922 slat (usec): min=39, max=448, avg=41.08, stdev= 8.88 00:25:00.922 clat (usec): min=1598, max=50330, avg=9363.35, stdev=3894.80 00:25:00.922 lat (usec): min=1638, max=50370, avg=9404.43, stdev=3895.84 00:25:00.922 clat percentiles (usec): 00:25:00.922 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7832], 00:25:00.922 
| 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:25:00.922 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:25:00.922 | 99.00th=[16450], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:25:00.922 | 99.99th=[50070] 00:25:00.922 bw ( KiB/s): min=69888, max=89792, per=88.05%, avg=78416.00, stdev=8569.41, samples=4 00:25:00.922 iops : min= 4368, max= 5612, avg=4901.00, stdev=535.59, samples=4 00:25:00.922 lat (msec) : 2=0.04%, 4=0.62%, 10=80.08%, 20=18.38%, 50=0.78% 00:25:00.922 lat (msec) : 100=0.10% 00:25:00.922 cpu : usr=84.77%, sys=14.40%, ctx=13, majf=0, minf=39 00:25:00.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.922 issued rwts: total=19130,9858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.922 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.922 00:25:00.922 Run status group 0 (all jobs): 00:25:00.922 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=299MiB (313MB), run=2043-2043msec 00:25:00.922 WRITE: bw=87.0MiB/s (91.2MB/s), 87.0MiB/s-87.0MiB/s (91.2MB/s-91.2MB/s), io=154MiB (162MB), run=1771-1771msec 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 
-- # nvmfcleanup 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.922 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.183 rmmod nvme_tcp 00:25:01.183 rmmod nvme_fabrics 00:25:01.183 rmmod nvme_keyring 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2516766 ']' 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2516766 ']' 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2516766' 00:25:01.184 killing process with pid 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2516766 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.184 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.444 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.444 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.444 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.444 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.444 14:06:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.356 00:25:03.356 real 0m18.138s 00:25:03.356 user 1m0.218s 00:25:03.356 sys 0m7.862s 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.356 
************************************ 00:25:03.356 END TEST nvmf_fio_host 00:25:03.356 ************************************ 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.356 ************************************ 00:25:03.356 START TEST nvmf_failover 00:25:03.356 ************************************ 00:25:03.356 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.617 * Looking for test storage... 00:25:03.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.617 14:06:49 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.617 --rc genhtml_branch_coverage=1 00:25:03.617 --rc genhtml_function_coverage=1 00:25:03.617 --rc genhtml_legend=1 00:25:03.617 --rc geninfo_all_blocks=1 00:25:03.617 --rc geninfo_unexecuted_blocks=1 00:25:03.617 00:25:03.617 ' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.617 --rc genhtml_branch_coverage=1 00:25:03.617 --rc genhtml_function_coverage=1 00:25:03.617 --rc genhtml_legend=1 00:25:03.617 --rc geninfo_all_blocks=1 00:25:03.617 --rc geninfo_unexecuted_blocks=1 00:25:03.617 00:25:03.617 ' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.617 --rc genhtml_branch_coverage=1 00:25:03.617 --rc genhtml_function_coverage=1 00:25:03.617 --rc genhtml_legend=1 00:25:03.617 --rc geninfo_all_blocks=1 00:25:03.617 --rc geninfo_unexecuted_blocks=1 00:25:03.617 00:25:03.617 ' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.617 --rc genhtml_branch_coverage=1 00:25:03.617 --rc genhtml_function_coverage=1 00:25:03.617 --rc 
genhtml_legend=1 00:25:03.617 --rc geninfo_all_blocks=1 00:25:03.617 --rc geninfo_unexecuted_blocks=1 00:25:03.617 00:25:03.617 ' 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.617 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.618 14:06:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.618 14:06:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.756 14:06:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:11.756 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:11.756 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:11.756 Found net devices under 0000:31:00.0: cvl_0_0 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:11.756 Found net devices under 0000:31:00.1: cvl_0_1 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.756 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:25:11.757 00:25:11.757 --- 10.0.0.2 ping statistics --- 00:25:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.757 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:25:11.757 00:25:11.757 --- 10.0.0.1 ping statistics --- 00:25:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.757 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2522821 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2522821 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2522821 ']' 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:11.757 14:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.757 [2024-11-06 14:06:57.612788] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:25:11.757 [2024-11-06 14:06:57.612858] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.757 [2024-11-06 14:06:57.715924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.757 [2024-11-06 14:06:57.768034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.757 [2024-11-06 14:06:57.768092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.757 [2024-11-06 14:06:57.768100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.757 [2024-11-06 14:06:57.768107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:11.757 [2024-11-06 14:06:57.768114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.757 [2024-11-06 14:06:57.769974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.757 [2024-11-06 14:06:57.770133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.757 [2024-11-06 14:06:57.770134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.329 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:12.329 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:12.329 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.329 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.329 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:12.330 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.330 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:12.591 [2024-11-06 14:06:58.641392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.591 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.852 Malloc0 00:25:12.852 14:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.852 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.114 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.377 [2024-11-06 14:06:59.465718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.377 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.637 [2024-11-06 14:06:59.662266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.637 [2024-11-06 14:06:59.862993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2523323 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2523323 /var/tmp/bdevperf.sock 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 2523323 ']' 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:13.637 14:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.574 14:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:14.574 14:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:14.574 14:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.831 NVMe0n1 00:25:14.831 14:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:15.399 00:25:15.399 14:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2523543 00:25:15.399 14:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:15.399 14:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:16.337 14:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.337 [2024-11-06 14:07:02.583135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.337 [2024-11-06 14:07:02.583220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same 
with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 
is same with the state(6) to be set 00:25:16.338 [2024-11-06 14:07:02.583462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7646d0 is same with the state(6) to be set 00:25:16.597 14:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:19.889 14:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:19.889 00:25:19.889 14:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.889 [2024-11-06 14:07:06.040196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 
14:07:06.040269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040326] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 
is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 
00:25:19.889 [2024-11-06 14:07:06.040515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040580] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.889 [2024-11-06 14:07:06.040615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765520 is same with the state(6) to be set 00:25:19.890 14:07:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:23.207 14:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.207 [2024-11-06 14:07:09.228605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.207 14:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:24.147 14:07:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:24.147 [2024-11-06 14:07:10.420232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8b07a0 is same with the state(6) to be set 00:25:24.147 [... same tcp.c:1773 *ERROR* for tqpair=0x8b07a0 repeated through 14:07:10.420417 ...] 00:25:24.406 14:07:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2523543 00:25:30.994 { 00:25:30.994 "results": [ 00:25:30.994 { 00:25:30.994 "job": "NVMe0n1", 00:25:30.994 "core_mask": "0x1", 00:25:30.994 "workload": "verify", 00:25:30.994 "status": "finished", 00:25:30.994 "verify_range": { 00:25:30.994 "start": 0, 00:25:30.994 "length": 16384 00:25:30.994 }, 00:25:30.994 "queue_depth": 128, 00:25:30.994 "io_size": 4096, 00:25:30.994 "runtime": 15.003817, 00:25:30.994 "iops": 12200.62868002189, 00:25:30.994 "mibps": 47.658705781335506, 00:25:30.994 "io_failed": 20500, 00:25:30.994 "io_timeout": 0, 00:25:30.994 "avg_latency_us": 9414.207476402891, 00:25:30.994 "min_latency_us": 358.4, 00:25:30.994 "max_latency_us": 31020.373333333333 00:25:30.994 } 00:25:30.994 ], 00:25:30.994 "core_count": 1 00:25:30.994 } 00:25:30.994 14:07:16
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2523323 ']' 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2523323' 00:25:30.994 killing process with pid 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2523323 00:25:30.994 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.994 [2024-11-06 14:06:59.953991] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
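The bdevperf run above prints its summary as JSON. As a minimal sketch (assuming only the field names visible in the log, and that `mibps` is derived from `iops * io_size`), the headline numbers can be pulled out with Python's standard `json` module:

```python
import json

# Structure copied from the bdevperf summary in the log above;
# the values here are the ones the run actually reported.
summary = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {"start": 0, "length": 16384},
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.003817,
      "iops": 12200.62868002189,
      "mibps": 47.658705781335506,
      "io_failed": 20500,
      "io_timeout": 0,
      "avg_latency_us": 9414.207476402891,
      "min_latency_us": 358.4,
      "max_latency_us": 31020.373333333333
    }
  ],
  "core_count": 1
}
""")

job = summary["results"][0]
# Cross-check: MiB/s should equal iops * io_size (bytes) / 2^20.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f'{job["job"]}: {job["iops"]:.0f} IOPS, '
      f'{derived_mibps:.1f} MiB/s, {job["io_failed"]} failed I/O')
# → NVMe0n1: 12201 IOPS, 47.7 MiB/s, 20500 failed I/O
```

The nonzero `io_failed` count is expected here: during each failover window the in-flight I/O on the removed listener is aborted and retried by the host.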
00:25:30.995 [2024-11-06 14:06:59.954070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523323 ] 00:25:30.995 [2024-11-06 14:07:00.050460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.995 [2024-11-06 14:07:00.092782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.995 Running I/O for 15 seconds... 00:25:30.995 11295.00 IOPS, 44.12 MiB/s [2024-11-06T13:07:17.275Z] [2024-11-06 14:07:02.584967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [... the same READ command / ABORTED - SQ DELETION completion pair repeats for lba:97000 through lba:97240 (len:8, 8 blocks apart) ...] 00:25:30.995 [2024-11-06 14:07:02.585557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1
lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [2024-11-06 14:07:02.585576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [2024-11-06 14:07:02.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [2024-11-06 14:07:02.585610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [2024-11-06 14:07:02.585629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.995 [2024-11-06 14:07:02.585646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.995 [2024-11-06 14:07:02.585653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 
[2024-11-06 14:07:02.585663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 
[2024-11-06 14:07:02.585962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.585987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.585996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 
[2024-11-06 14:07:02.586254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.996 [2024-11-06 14:07:02.586330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.996 [2024-11-06 14:07:02.586340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 
14:07:02.586543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586634] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.997 [2024-11-06 14:07:02.586826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.997 [2024-11-06 14:07:02.586836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.997 [2024-11-06 14:07:02.586843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request / nvme_io_qpair_print_command / spdk_nvme_print_completion entries for each queued WRITE, lba:97848 through lba:98000, every command completed as ABORTED - SQ DELETION (00/08) ...]
00:25:30.998 [2024-11-06 14:07:02.598407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0
00:25:30.998 [2024-11-06 14:07:02.598414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0
00:25:30.998 [2024-11-06 14:07:02.598465] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0 through cid:3) each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:30.998 [2024-11-06 14:07:02.598562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:30.998 [2024-11-06 14:07:02.598609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d1fc0 (9): Bad file descriptor
00:25:30.998 [2024-11-06 14:07:02.602199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:30.998 [2024-11-06 14:07:02.762994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:30.998 10238.00 IOPS, 39.99 MiB/s [2024-11-06T13:07:17.278Z] 10539.67 IOPS, 41.17 MiB/s [2024-11-06T13:07:17.278Z] 11090.00 IOPS, 43.32 MiB/s [2024-11-06T13:07:17.278Z]
00:25:30.998 [2024-11-06 14:07:06.042843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.998 [2024-11-06 14:07:06.042873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion entries for each queued READ/WRITE, lba:65480 through lba:66080, every command completed as ABORTED - SQ DELETION (00/08) ...]
00:25:31.000 [2024-11-06 14:07:06.043807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.000 [2024-11-06 14:07:06.043812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:31.000 [2024-11-06 14:07:06.043819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.000 [2024-11-06 14:07:06.043824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.000 [2024-11-06 14:07:06.043831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.000 [2024-11-06 14:07:06.043836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.000 [2024-11-06 14:07:06.043843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.000 [2024-11-06 14:07:06.043848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.000 [2024-11-06 14:07:06.043855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.000 [2024-11-06 14:07:06.043860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.000 [2024-11-06 14:07:06.043867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.001 [2024-11-06 14:07:06.043929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.043950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.043956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:31.001 [2024-11-06 14:07:06.043963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.043968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.043972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66184 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.043977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.043982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.043986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.043991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.043997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66200 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:25:31.001 [2024-11-06 14:07:06.044033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66208 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66216 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66224 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66232 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66240 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66248 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66256 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 
[2024-11-06 14:07:06.044160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66264 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66272 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66280 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:66288 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66296 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044284] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.001 [2024-11-06 14:07:06.044311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:25:31.001 [2024-11-06 14:07:06.044316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.001 [2024-11-06 14:07:06.044321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.001 [2024-11-06 14:07:06.044328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.044333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.044343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.044347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 
14:07:06.044351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.044362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.044365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.044369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.044379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.044383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.044387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.044398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.044402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.044406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66368 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.044417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.044420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.044424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66376 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.044430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66384 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66392 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055824] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66400 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66408 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66416 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66424 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 
[2024-11-06 14:07:06.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66432 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66440 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.055978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.055984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66448 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.055990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.055998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.056003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.056009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66456 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.056016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.056022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.056030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.056037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66464 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.056045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.056052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.056058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.056064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66472 len:8 PRP1 0x0 PRP2 0x0 00:25:31.002 [2024-11-06 14:07:06.056071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.002 [2024-11-06 14:07:06.056078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.002 [2024-11-06 14:07:06.056084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.002 [2024-11-06 14:07:06.056089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66480 len:8 PRP1 0x0 PRP2 0x0
00:25:31.002 [2024-11-06 14:07:06.056096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.002 [2024-11-06 14:07:06.056105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:31.002 [2024-11-06 14:07:06.056110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:31.002 [... identical print_command/print_completion pair repeats for WRITE lba:66488, also ABORTED - SQ DELETION (00/08) ...]
00:25:31.002 [2024-11-06 14:07:06.056163] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:31.002 [... four admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:3 down to cid:0) each completed ABORTED - SQ DELETION (00/08) ...]
00:25:31.002 [2024-11-06 14:07:06.056256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:31.002 [2024-11-06 14:07:06.056297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d1fc0 (9): Bad file descriptor
00:25:31.002 [2024-11-06 14:07:06.059547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:31.002 [2024-11-06 14:07:06.211494] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:31.002 10994.60 IOPS, 42.95 MiB/s [2024-11-06T13:07:17.282Z]
00:25:31.002 11329.33 IOPS, 44.26 MiB/s [2024-11-06T13:07:17.282Z]
00:25:31.002 11575.00 IOPS, 45.21 MiB/s [2024-11-06T13:07:17.282Z]
00:25:31.002 11733.12 IOPS, 45.83 MiB/s [2024-11-06T13:07:17.283Z]
00:25:31.002 [2024-11-06 14:07:10.420881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.003 [2024-11-06 14:07:10.420912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.003 [... identical print_command/print_completion pairs repeat for queued WRITE commands lba:43216 through lba:43952 (SGL DATA BLOCK OFFSET) and READ commands lba:43072 through lba:43120 (SGL TRANSPORT DATA BLOCK), every one completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:25:31.005 [2024-11-06 14:07:10.422140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:31.005 [2024-11-06 14:07:10.422146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43960 len:8 PRP1 0x0 PRP2 0x0
00:25:31.005 [2024-11-06 14:07:10.422151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.005 [... aborting-queued-i/o / manual-completion sequence repeats for WRITE lba:43968, 43976, 43984, each ABORTED - SQ DELETION (00/08) ...]
00:25:31.005 [2024-11-06 14:07:10.422371] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.005 [2024-11-06 14:07:10.422375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.005 [2024-11-06 14:07:10.422379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43992 len:8 PRP1 0x0 PRP2 0x0 00:25:31.005 [2024-11-06 14:07:10.422385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.005 [2024-11-06 14:07:10.422391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.005 [2024-11-06 14:07:10.422395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.005 [2024-11-06 14:07:10.422399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44000 len:8 PRP1 0x0 PRP2 0x0 00:25:31.005 [2024-11-06 14:07:10.422404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.005 [2024-11-06 14:07:10.422411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.005 [2024-11-06 14:07:10.422416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.005 [2024-11-06 14:07:10.422420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44008 len:8 PRP1 0x0 PRP2 0x0 00:25:31.005 [2024-11-06 14:07:10.422425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.005 [2024-11-06 14:07:10.422431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.005 [2024-11-06 14:07:10.422435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.005 [2024-11-06 14:07:10.422439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44016 len:8 PRP1 0x0 PRP2 0x0 00:25:31.005 [2024-11-06 14:07:10.422448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44024 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44032 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44040 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44048 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44056 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44064 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422572] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44072 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44080 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43128 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43136 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 
[2024-11-06 14:07:10.422641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43144 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.422659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.422665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.422669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.422674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43152 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43160 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43168 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43176 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43184 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43192 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43200 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43208 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.006 [2024-11-06 14:07:10.434315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.006 [2024-11-06 14:07:10.434320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.006 [2024-11-06 14:07:10.434326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43064 len:8 PRP1 0x0 PRP2 0x0 00:25:31.006 [2024-11-06 14:07:10.434333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 
[2024-11-06 14:07:10.434340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43216 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43224 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43232 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:25:31.007 [2024-11-06 14:07:10.434430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43240 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43248 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43256 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43264 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43272 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43280 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43288 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434605] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43296 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43304 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43312 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43320 len:8 PRP1 0x0 PRP2 0x0 
00:25:31.007 [2024-11-06 14:07:10.434694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43328 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43336 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43344 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434789] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43352 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43360 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43368 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.007 [2024-11-06 14:07:10.434879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43376 len:8 PRP1 0x0 PRP2 0x0 00:25:31.007 [2024-11-06 14:07:10.434886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.007 [2024-11-06 14:07:10.434893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.007 [2024-11-06 14:07:10.434899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the identical abort/manual-complete cycle (nvme_qpair_abort_queued_reqs -> nvme_qpair_manual_complete_request -> nvme_io_qpair_print_command -> spdk_nvme_print_completion) repeats for WRITE commands at sequential LBAs 43384-43888 (len:8 each) and READ commands at LBAs 43072-43120, every one completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 2024-11-06 14:07:10.434905 through 14:07:10.444895 ...]
00:25:31.010 [2024-11-06 14:07:10.444885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.010 [2024-11-06 14:07:10.444895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.010 
[2024-11-06 14:07:10.444902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.010 [2024-11-06 14:07:10.444910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43896 len:8 PRP1 0x0 PRP2 0x0 00:25:31.010 [2024-11-06 14:07:10.444921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.010 [2024-11-06 14:07:10.444932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.010 [2024-11-06 14:07:10.444940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.010 [2024-11-06 14:07:10.444948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43904 len:8 PRP1 0x0 PRP2 0x0 00:25:31.010 [2024-11-06 14:07:10.444957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.444967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.444974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.444982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43912 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:43920 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43928 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43936 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43944 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445140] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43952 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.011 [2024-11-06 14:07:10.445181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.011 [2024-11-06 14:07:10.445191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43960 len:8 PRP1 0x0 PRP2 0x0 00:25:31.011 [2024-11-06 14:07:10.445201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445252] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:31.011 [2024-11-06 14:07:10.445288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.011 [2024-11-06 14:07:10.445300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.011 [2024-11-06 14:07:10.445322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:31.011 [2024-11-06 14:07:10.445333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.011 [2024-11-06 14:07:10.445343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.011 [2024-11-06 14:07:10.445362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.011 [2024-11-06 14:07:10.445372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:31.011 [2024-11-06 14:07:10.445411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d1fc0 (9): Bad file descriptor 00:25:31.011 [2024-11-06 14:07:10.449851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:31.011 11706.78 IOPS, 45.73 MiB/s [2024-11-06T13:07:17.291Z] [2024-11-06 14:07:10.560096] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:31.011 11802.20 IOPS, 46.10 MiB/s [2024-11-06T13:07:17.291Z] 11913.27 IOPS, 46.54 MiB/s [2024-11-06T13:07:17.291Z] 12007.92 IOPS, 46.91 MiB/s [2024-11-06T13:07:17.291Z] 12086.77 IOPS, 47.21 MiB/s [2024-11-06T13:07:17.291Z] 12151.36 IOPS, 47.47 MiB/s 00:25:31.011 Latency(us) 00:25:31.011 [2024-11-06T13:07:17.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:31.011 Verification LBA range: start 0x0 length 0x4000 00:25:31.011 NVMe0n1 : 15.00 12200.63 47.66 1366.32 0.00 9414.21 358.40 31020.37 00:25:31.011 [2024-11-06T13:07:17.291Z] =================================================================================================================== 00:25:31.011 [2024-11-06T13:07:17.291Z] Total : 12200.63 47.66 1366.32 0.00 9414.21 358.40 31020.37 00:25:31.011 Received shutdown signal, test time was about 15.000000 seconds 00:25:31.011 00:25:31.011 Latency(us) 00:25:31.011 [2024-11-06T13:07:17.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.011 [2024-11-06T13:07:17.291Z] =================================================================================================================== 00:25:31.011 [2024-11-06T13:07:17.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2526531 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2526531 /var/tmp/bdevperf.sock 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2526531 ']' 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:31.011 14:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.581 14:07:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:31.581 14:07:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:31.581 14:07:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.581 [2024-11-06 14:07:17.757457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.581 14:07:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:31.841 [2024-11-06 14:07:17.945918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:31.841 14:07:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.101 NVMe0n1 00:25:32.361 14:07:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.622 00:25:32.622 14:07:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.915 00:25:32.915 14:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.915 14:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:33.210 14:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:33.210 14:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:36.552 14:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.552 14:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:36.552 14:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2527634 00:25:36.552 14:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.552 14:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2527634 00:25:37.490 { 00:25:37.490 "results": [ 00:25:37.490 { 00:25:37.490 "job": "NVMe0n1", 00:25:37.490 "core_mask": "0x1", 00:25:37.491 "workload": "verify", 00:25:37.491 "status": "finished", 00:25:37.491 "verify_range": { 00:25:37.491 "start": 0, 00:25:37.491 "length": 16384 00:25:37.491 }, 00:25:37.491 "queue_depth": 128, 00:25:37.491 "io_size": 4096, 00:25:37.491 "runtime": 1.004976, 00:25:37.491 "iops": 12985.384725605387, 00:25:37.491 "mibps": 50.72415908439604, 00:25:37.491 "io_failed": 0, 00:25:37.491 "io_timeout": 0, 00:25:37.491 "avg_latency_us": 9816.710310089402, 00:25:37.491 "min_latency_us": 1037.6533333333334, 00:25:37.491 "max_latency_us": 9994.24 00:25:37.491 } 00:25:37.491 ], 00:25:37.491 "core_count": 1 00:25:37.491 } 00:25:37.491 14:07:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.491 [2024-11-06 14:07:16.797328] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:25:37.491 [2024-11-06 14:07:16.797386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526531 ] 00:25:37.491 [2024-11-06 14:07:16.882762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.491 [2024-11-06 14:07:16.912625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.491 [2024-11-06 14:07:19.380885] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:37.491 [2024-11-06 14:07:19.380925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.491 [2024-11-06 14:07:19.380934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.491 [2024-11-06 14:07:19.380941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.491 [2024-11-06 14:07:19.380947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.491 [2024-11-06 14:07:19.380953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.491 [2024-11-06 14:07:19.380959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.491 [2024-11-06 14:07:19.380964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.491 [2024-11-06 14:07:19.380970] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.491 [2024-11-06 14:07:19.380975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:37.491 [2024-11-06 14:07:19.380996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:37.491 [2024-11-06 14:07:19.381007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d6fc0 (9): Bad file descriptor 00:25:37.491 [2024-11-06 14:07:19.474906] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:37.491 Running I/O for 1 seconds... 00:25:37.491 12922.00 IOPS, 50.48 MiB/s 00:25:37.491 Latency(us) 00:25:37.491 [2024-11-06T13:07:23.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.491 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.491 Verification LBA range: start 0x0 length 0x4000 00:25:37.491 NVMe0n1 : 1.00 12985.38 50.72 0.00 0.00 9816.71 1037.65 9994.24 00:25:37.491 [2024-11-06T13:07:23.771Z] =================================================================================================================== 00:25:37.491 [2024-11-06T13:07:23.771Z] Total : 12985.38 50.72 0.00 0.00 9816.71 1037.65 9994.24 00:25:37.491 14:07:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.491 14:07:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:37.751 14:07:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.011 14:07:24 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.011 14:07:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:38.011 14:07:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.271 14:07:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2526531 ']' 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526531' 00:25:41.564 killing 
process with pid 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2526531 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:41.564 14:07:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.823 rmmod nvme_tcp 00:25:41.823 rmmod nvme_fabrics 00:25:41.823 rmmod nvme_keyring 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2522821 ']' 00:25:41.823 14:07:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2522821 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2522821 ']' 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2522821 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:41.823 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2522821 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2522821' 00:25:42.084 killing process with pid 2522821 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2522821 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2522821 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.084 14:07:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.084 14:07:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.630 00:25:44.630 real 0m40.735s 00:25:44.630 user 2m4.761s 00:25:44.630 sys 0m8.915s 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:44.630 ************************************ 00:25:44.630 END TEST nvmf_failover 00:25:44.630 ************************************ 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.630 ************************************ 00:25:44.630 START TEST nvmf_host_discovery 00:25:44.630 ************************************ 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.630 * Looking for test storage... 
00:25:44.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.630 --rc genhtml_branch_coverage=1 00:25:44.630 --rc genhtml_function_coverage=1 00:25:44.630 --rc 
genhtml_legend=1 00:25:44.630 --rc geninfo_all_blocks=1 00:25:44.630 --rc geninfo_unexecuted_blocks=1 00:25:44.630 00:25:44.630 ' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.630 --rc genhtml_branch_coverage=1 00:25:44.630 --rc genhtml_function_coverage=1 00:25:44.630 --rc genhtml_legend=1 00:25:44.630 --rc geninfo_all_blocks=1 00:25:44.630 --rc geninfo_unexecuted_blocks=1 00:25:44.630 00:25:44.630 ' 00:25:44.630 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.630 --rc genhtml_branch_coverage=1 00:25:44.630 --rc genhtml_function_coverage=1 00:25:44.630 --rc genhtml_legend=1 00:25:44.630 --rc geninfo_all_blocks=1 00:25:44.630 --rc geninfo_unexecuted_blocks=1 00:25:44.630 00:25:44.631 ' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.631 --rc genhtml_branch_coverage=1 00:25:44.631 --rc genhtml_function_coverage=1 00:25:44.631 --rc genhtml_legend=1 00:25:44.631 --rc geninfo_all_blocks=1 00:25:44.631 --rc geninfo_unexecuted_blocks=1 00:25:44.631 00:25:44.631 ' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.631 14:07:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.631 14:07:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.631 14:07:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.631 14:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.775 
14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.775 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.776 14:07:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:52.776 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:52.776 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:52.776 Found net devices under 0000:31:00.0: cvl_0_0 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:52.776 Found net devices under 0000:31:00.1: cvl_0_1 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.776 14:07:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:25:52.776 00:25:52.776 --- 10.0.0.2 ping statistics --- 00:25:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.776 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:25:52.776 00:25:52.776 --- 10.0.0.1 ping statistics --- 00:25:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.776 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.776 
14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2532929 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2532929 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2532929 ']' 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.776 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:52.777 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:52.777 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:52.777 14:07:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.777 [2024-11-06 14:07:38.326539] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:25:52.777 [2024-11-06 14:07:38.326603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.777 [2024-11-06 14:07:38.425782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.777 [2024-11-06 14:07:38.475549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.777 [2024-11-06 14:07:38.475599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.777 [2024-11-06 14:07:38.475608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.777 [2024-11-06 14:07:38.475615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.777 [2024-11-06 14:07:38.475621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.777 [2024-11-06 14:07:38.476455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 [2024-11-06 14:07:39.193018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 [2024-11-06 14:07:39.205290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:53.038 14:07:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 null0 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 null1 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.038 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2533241 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2533241 /tmp/host.sock 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 2533241 ']' 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:53.039 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:53.039 14:07:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.039 [2024-11-06 14:07:39.301521] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:25:53.039 [2024-11-06 14:07:39.301585] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533241 ] 00:25:53.299 [2024-11-06 14:07:39.395128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.299 [2024-11-06 14:07:39.447337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.870 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:53.870 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:53.870 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.870 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:53.870 
14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.870 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:54.131 14:07:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:54.131 14:07:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.131 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.391 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:54.391 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.392 [2024-11-06 14:07:40.496626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.392 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 
00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:54.652 14:07:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:54.913 [2024-11-06 14:07:41.171663] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.913 [2024-11-06 14:07:41.171694] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.913 [2024-11-06 14:07:41.171709] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.173 [2024-11-06 14:07:41.299103] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:55.173 [2024-11-06 14:07:41.401152] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:55.173 [2024-11-06 14:07:41.402366] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23098c0:1 started. 
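The `waitforcondition` calls traced above (local `max=10`, `(( max-- ))`, `eval` of the condition string, `sleep 1`) follow a plain retry-poll pattern: the first check at 14:07:40 sees an empty controller list, the helper sleeps, and the re-check at 14:07:41 succeeds once discovery has attached `nvme0`. A minimal sketch of that pattern (hypothetical re-implementation; SPDK's actual helper in `autotest_common.sh` may differ in details):

```shell
# Hypothetical sketch of the waitforcondition retry-poll pattern visible in
# the trace: re-evaluate a condition string up to `max` times, sleeping one
# second between attempts, and report whether it ever held.
waitforcondition() {
    local cond="$1"
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0            # condition holds: stop waiting
        fi
        sleep 1
    done
    return 1                    # gave up after max attempts
}

# Example condition, as used in the trace (needs the test's helpers):
# waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
```

Because the condition is passed as a single quoted string and re-`eval`ed each iteration, command substitutions inside it (like `$(get_subsystem_names)`) are re-run on every attempt rather than expanded once at call time.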
00:25:55.173 [2024-11-06 14:07:41.404187] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.173 [2024-11-06 14:07:41.404213] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.173 [2024-11-06 14:07:41.411042] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23098c0 was disconnected and freed. delete nvme_qpair. 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.743 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.744 14:07:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 14:07:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.005 [2024-11-06 14:07:42.190827] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2309aa0:1 started. 
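The `get_bdev_list` and `get_subsystem_names` helpers in the trace pipe `rpc_cmd` output through `jq -r '.[].name'`, then `sort`, then `xargs`, so that checks like `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` compare a canonical, order-independent string. The normalization step in isolation (the `jq` extraction is omitted here since it needs a live RPC socket):

```shell
# Sketch of the sort|xargs normalization used by get_bdev_list in the trace:
# newline-separated names are sorted, then xargs joins them into a single
# space-separated line, making the string comparison order-independent.
normalize_names() {
    sort | xargs
}

printf 'nvme0n2\nnvme0n1\n' | normalize_names   # -> nvme0n1 nvme0n2
```

Note that the path check uses `sort -n` instead (`get_subsystem_paths`, discovery.sh@63), since the values being compared there are numeric trsvcid ports such as `4420 4421`.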
00:25:56.005 [2024-11-06 14:07:42.202208] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2309aa0 was disconnected and freed. delete nvme_qpair. 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.005 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.005 [2024-11-06 14:07:42.281172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:56.005 [2024-11-06 14:07:42.281444] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:56.005 [2024-11-06 14:07:42.281468] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.266 14:07:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.266 14:07:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.266 [2024-11-06 14:07:42.408262] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:56.266 14:07:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:56.266 [2024-11-06 14:07:42.507192] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:56.266 [2024-11-06 14:07:42.507230] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.266 [2024-11-06 14:07:42.507239] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:56.266 [2024-11-06 14:07:42.507244] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.208 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 [2024-11-06 14:07:43.537007] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:57.470 [2024-11-06 14:07:43.537024] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:57.470 [2024-11-06 14:07:43.545442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.470 [2024-11-06 14:07:43.545456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.470 [2024-11-06 14:07:43.545463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.470 [2024-11-06 14:07:43.545469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.470 [2024-11-06 14:07:43.545475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.470 [2024-11-06 14:07:43.545480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.470 [2024-11-06 14:07:43.545486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.470 [2024-11-06 14:07:43.545491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.470 [2024-11-06 14:07:43.545496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.470 14:07:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.470 [2024-11-06 14:07:43.555457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.470 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.470 [2024-11-06 14:07:43.565492] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.470 [2024-11-06 14:07:43.565501] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.470 [2024-11-06 14:07:43.565504] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.470 [2024-11-06 14:07:43.565509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.470 [2024-11-06 14:07:43.565527] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.470 [2024-11-06 14:07:43.565713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.470 [2024-11-06 14:07:43.565725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.470 [2024-11-06 14:07:43.565731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.470 [2024-11-06 14:07:43.565739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.470 [2024-11-06 14:07:43.565751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.470 [2024-11-06 14:07:43.565756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.470 [2024-11-06 14:07:43.565763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.470 [2024-11-06 14:07:43.565768] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.470 [2024-11-06 14:07:43.565772] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.470 [2024-11-06 14:07:43.565775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.470 [2024-11-06 14:07:43.575555] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.470 [2024-11-06 14:07:43.575563] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:57.470 [2024-11-06 14:07:43.575567] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.470 [2024-11-06 14:07:43.575570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.470 [2024-11-06 14:07:43.575581] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.470 [2024-11-06 14:07:43.576003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.470 [2024-11-06 14:07:43.576034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.470 [2024-11-06 14:07:43.576043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.470 [2024-11-06 14:07:43.576057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.470 [2024-11-06 14:07:43.576066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.470 [2024-11-06 14:07:43.576071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.470 [2024-11-06 14:07:43.576077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.470 [2024-11-06 14:07:43.576083] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.470 [2024-11-06 14:07:43.576087] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.470 [2024-11-06 14:07:43.576090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.470 [2024-11-06 14:07:43.585611] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.470 [2024-11-06 14:07:43.585623] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.470 [2024-11-06 14:07:43.585627] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.470 [2024-11-06 14:07:43.585634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.470 [2024-11-06 14:07:43.585647] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.471 [2024-11-06 14:07:43.586077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.471 [2024-11-06 14:07:43.586108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.471 [2024-11-06 14:07:43.586117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.471 [2024-11-06 14:07:43.586131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.471 [2024-11-06 14:07:43.586140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.471 [2024-11-06 14:07:43.586145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.471 [2024-11-06 14:07:43.586151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.471 [2024-11-06 14:07:43.586156] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:57.471 [2024-11-06 14:07:43.586160] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.471 [2024-11-06 14:07:43.586163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 [2024-11-06 14:07:43.595677] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.471 [2024-11-06 14:07:43.595693] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.471 [2024-11-06 14:07:43.595697] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.471 [2024-11-06 14:07:43.595700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.471 [2024-11-06 14:07:43.595712] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.471 [2024-11-06 14:07:43.596024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.471 [2024-11-06 14:07:43.596036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.471 [2024-11-06 14:07:43.596042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.471 [2024-11-06 14:07:43.596050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.471 [2024-11-06 14:07:43.596057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.471 [2024-11-06 14:07:43.596062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.471 [2024-11-06 14:07:43.596067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.471 [2024-11-06 14:07:43.596072] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.471 [2024-11-06 14:07:43.596075] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.471 [2024-11-06 14:07:43.596084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.471 [2024-11-06 14:07:43.605741] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.471 [2024-11-06 14:07:43.605754] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.471 [2024-11-06 14:07:43.605758] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.471 [2024-11-06 14:07:43.605761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.471 [2024-11-06 14:07:43.605772] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.471 [2024-11-06 14:07:43.606136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.471 [2024-11-06 14:07:43.606146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.471 [2024-11-06 14:07:43.606152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.471 [2024-11-06 14:07:43.606159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.471 [2024-11-06 14:07:43.606167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.471 [2024-11-06 14:07:43.606171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.471 [2024-11-06 14:07:43.606177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.471 [2024-11-06 14:07:43.606181] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.471 [2024-11-06 14:07:43.606184] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.471 [2024-11-06 14:07:43.606187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.471 [2024-11-06 14:07:43.615801] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.471 [2024-11-06 14:07:43.615811] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:57.471 [2024-11-06 14:07:43.615814] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.471 [2024-11-06 14:07:43.615817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.471 [2024-11-06 14:07:43.615829] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.471 [2024-11-06 14:07:43.616175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.471 [2024-11-06 14:07:43.616184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9fd0 with addr=10.0.0.2, port=4420 00:25:57.471 [2024-11-06 14:07:43.616190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9fd0 is same with the state(6) to be set 00:25:57.471 [2024-11-06 14:07:43.616198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9fd0 (9): Bad file descriptor 00:25:57.471 [2024-11-06 14:07:43.616205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.471 [2024-11-06 14:07:43.616210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.471 [2024-11-06 14:07:43.616215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.471 [2024-11-06 14:07:43.616220] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.471 [2024-11-06 14:07:43.616223] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.471 [2024-11-06 14:07:43.616226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.471 [2024-11-06 14:07:43.623274] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:57.471 [2024-11-06 14:07:43.623287] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.471 
14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:57.471 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.472 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:57.733 14:07:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.733 
14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:57.733 14:07:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.733 14:07:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.117 [2024-11-06 14:07:45.002931] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.117 [2024-11-06 14:07:45.002946] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.117 [2024-11-06 14:07:45.002955] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.117 [2024-11-06 14:07:45.090200] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:59.377 [2024-11-06 14:07:45.400611] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:59.377 [2024-11-06 14:07:45.401315] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x22f11d0:1 started. 00:25:59.377 [2024-11-06 14:07:45.402670] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:59.377 [2024-11-06 14:07:45.402692] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.377 [2024-11-06 14:07:45.410865] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x22f11d0 was disconnected and freed. delete nvme_qpair. 00:25:59.377 request: 00:25:59.377 { 00:25:59.377 "name": "nvme", 00:25:59.377 "trtype": "tcp", 00:25:59.377 "traddr": "10.0.0.2", 00:25:59.377 "adrfam": "ipv4", 00:25:59.377 "trsvcid": "8009", 00:25:59.377 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.377 "wait_for_attach": true, 00:25:59.377 "method": "bdev_nvme_start_discovery", 00:25:59.377 "req_id": 1 00:25:59.377 } 00:25:59.377 Got JSON-RPC error response 00:25:59.377 response: 00:25:59.377 { 00:25:59.377 "code": -17, 00:25:59.377 "message": "File exists" 00:25:59.377 } 00:25:59.377 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.378 
14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:59.378 14:07:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.378 request: 00:25:59.378 { 00:25:59.378 "name": "nvme_second", 00:25:59.378 "trtype": "tcp", 00:25:59.378 "traddr": "10.0.0.2", 00:25:59.378 "adrfam": "ipv4", 00:25:59.378 "trsvcid": "8009", 00:25:59.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.378 "wait_for_attach": true, 00:25:59.378 "method": "bdev_nvme_start_discovery", 00:25:59.378 "req_id": 1 00:25:59.378 } 00:25:59.378 Got JSON-RPC error response 00:25:59.378 response: 00:25:59.378 { 00:25:59.378 "code": -17, 00:25:59.378 "message": "File exists" 00:25:59.378 } 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.378 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.638 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.638 14:07:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.578 [2024-11-06 14:07:46.662345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.578 [2024-11-06 14:07:46.662368] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3900 with addr=10.0.0.2, port=8010 00:26:00.578 [2024-11-06 14:07:46.662379] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:00.578 [2024-11-06 14:07:46.662385] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:00.578 [2024-11-06 14:07:46.662390] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.520 [2024-11-06 14:07:47.664565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.520 [2024-11-06 14:07:47.664585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3900 with addr=10.0.0.2, port=8010 00:26:01.520 [2024-11-06 14:07:47.664594] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:01.520 [2024-11-06 14:07:47.664599] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:01.520 [2024-11-06 14:07:47.664604] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:02.462 [2024-11-06 14:07:48.666680] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:02.462 request: 00:26:02.462 { 00:26:02.462 "name": "nvme_second", 00:26:02.462 "trtype": "tcp", 00:26:02.462 "traddr": "10.0.0.2", 00:26:02.462 "adrfam": "ipv4", 00:26:02.462 "trsvcid": "8010", 00:26:02.462 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:02.462 "wait_for_attach": false, 00:26:02.462 "attach_timeout_ms": 3000, 00:26:02.462 "method": "bdev_nvme_start_discovery", 00:26:02.462 "req_id": 1 00:26:02.462 } 00:26:02.462 Got JSON-RPC error response 00:26:02.462 response: 00:26:02.462 { 00:26:02.462 "code": -110, 00:26:02.462 "message": "Connection timed out" 00:26:02.462 } 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # 
[[ 1 == 0 ]] 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2533241 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:02.462 14:07:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.462 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.462 rmmod nvme_tcp 00:26:02.723 rmmod nvme_fabrics 00:26:02.723 rmmod nvme_keyring 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2532929 ']' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2532929 ']' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2532929' 
00:26:02.723 killing process with pid 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2532929 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.723 14:07:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.270 00:26:05.270 real 0m20.612s 00:26:05.270 user 0m23.993s 00:26:05.270 sys 0m7.298s 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.270 14:07:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.270 ************************************ 00:26:05.270 END TEST nvmf_host_discovery 00:26:05.270 ************************************ 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.270 ************************************ 00:26:05.270 START TEST nvmf_host_multipath_status 00:26:05.270 ************************************ 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.270 * Looking for test storage... 
00:26:05.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:05.270 14:07:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.270 14:07:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:05.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.270 --rc genhtml_branch_coverage=1 00:26:05.270 --rc genhtml_function_coverage=1 00:26:05.270 --rc genhtml_legend=1 00:26:05.270 --rc geninfo_all_blocks=1 00:26:05.270 --rc geninfo_unexecuted_blocks=1 00:26:05.270 00:26:05.270 ' 00:26:05.270 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:05.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.270 --rc genhtml_branch_coverage=1 00:26:05.271 --rc genhtml_function_coverage=1 00:26:05.271 --rc genhtml_legend=1 00:26:05.271 --rc geninfo_all_blocks=1 00:26:05.271 --rc geninfo_unexecuted_blocks=1 00:26:05.271 00:26:05.271 ' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:05.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.271 --rc genhtml_branch_coverage=1 00:26:05.271 --rc genhtml_function_coverage=1 00:26:05.271 --rc genhtml_legend=1 00:26:05.271 --rc geninfo_all_blocks=1 00:26:05.271 --rc geninfo_unexecuted_blocks=1 00:26:05.271 00:26:05.271 ' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:05.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.271 --rc genhtml_branch_coverage=1 00:26:05.271 --rc genhtml_function_coverage=1 00:26:05.271 --rc genhtml_legend=1 00:26:05.271 --rc geninfo_all_blocks=1 00:26:05.271 --rc geninfo_unexecuted_blocks=1 00:26:05.271 00:26:05.271 ' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:05.271 
14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.271 14:07:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.271 14:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:13.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:13.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:13.419 Found net devices under 0000:31:00.0: cvl_0_0 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.419 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.419 14:07:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:13.420 Found net devices under 0000:31:00.1: cvl_0_1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.420 14:07:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:26:13.420 00:26:13.420 --- 10.0.0.2 ping statistics --- 00:26:13.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.420 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:26:13.420 00:26:13.420 --- 10.0.0.1 ping statistics --- 00:26:13.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.420 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2539419 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2539419 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2539419 ']' 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:13.420 14:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.420 [2024-11-06 14:07:59.043272] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:26:13.420 [2024-11-06 14:07:59.043347] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.420 [2024-11-06 14:07:59.143174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.420 [2024-11-06 14:07:59.193899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.420 [2024-11-06 14:07:59.193951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:13.420 [2024-11-06 14:07:59.193961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.420 [2024-11-06 14:07:59.193968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.420 [2024-11-06 14:07:59.193975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.420 [2024-11-06 14:07:59.195654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.420 [2024-11-06 14:07:59.195659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2539419 00:26:13.681 14:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:13.943 [2024-11-06 14:08:00.076526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.943 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:14.207 Malloc0 00:26:14.207 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:14.469 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.469 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.730 [2024-11-06 14:08:00.901882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.730 14:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.991 [2024-11-06 14:08:01.094384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2539853 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2539853 /var/tmp/bdevperf.sock 00:26:14.991 14:08:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2539853 ']' 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.991 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.933 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.933 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:15.933 14:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.933 14:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.506 Nvme0n1 00:26:16.506 14:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:17.076 Nvme0n1 00:26:17.076 14:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:17.076 14:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:18.992 14:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:18.992 14:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:19.253 14:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.253 14:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:20.194 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:20.194 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.194 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.194 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.455 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.455 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.455 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.455 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.716 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.716 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.716 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.716 14:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.976 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.976 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.977 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.237 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.237 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.237 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.237 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.498 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.498 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:21.498 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.759 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.759 14:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:22.700 14:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:22.700 14:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.700 14:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.700 14:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.961 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.961 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.961 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.961 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.223 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.223 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.223 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.223 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.484 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.745 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.745 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.745 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.745 14:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.004 14:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.004 14:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:24.004 14:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.004 14:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:24.264 14:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:25.204 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:25.204 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.204 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.204 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.464 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.464 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.464 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.464 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.723 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.723 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.723 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.723 14:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.982 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.241 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.241 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.241 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.241 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.554 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.554 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:26.554 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.554 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.814 14:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:27.754 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:27.754 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.754 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.754 14:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.014 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.014 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.014 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.014 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.274 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.556 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.556 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.556 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.556 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.884 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.884 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.884 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.884 14:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.884 14:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.884 14:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:28.884 14:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:29.145 14:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:29.145 14:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.523 14:08:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:30.523 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.524 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.524 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.524 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.524 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.524 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.784 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.784 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.784 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.784 14:08:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.044 
14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.044 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:31.044 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.045 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.045 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.045 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:31.045 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.045 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.305 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.305 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:31.306 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:31.566 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:31.826 14:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:32.765 14:08:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:32.765 14:08:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.765 14:08:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.765 14:08:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.024 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.283 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.283 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.283 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.283 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.543 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.803 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.803 14:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:34.063 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:34.063 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:34.063 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:34.322 14:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:35.261 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:35.261 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.261 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:35.261 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.521 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.521 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:35.521 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.521 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.780 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.780 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.780 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.780 14:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.040 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.300 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.300 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:36.300 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.300 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.560 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.560 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:36.560 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.560 14:08:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.821 14:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:37.761 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:37.761 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.761 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.761 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.021 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.021 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:38.021 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.021 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.281 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.281 14:08:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.281 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.281 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.542 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.802 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.802 
14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.802 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.802 14:08:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.062 14:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.062 14:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:39.062 14:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:39.062 14:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:39.322 14:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:40.262 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:40.262 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.262 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.262 14:08:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.522 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.522 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.522 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.522 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.782 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.782 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.782 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.783 14:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.783 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.783 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.783 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.783 14:08:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.043 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.043 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:41.043 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.043 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.304 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.304 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:41.304 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.304 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.563 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.563 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:41.563 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.563 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:41.821 14:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:42.761 14:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:42.761 14:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:42.761 14:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.761 14:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:43.021 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.021 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:43.021 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.021 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.282 14:08:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.282 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.542 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.542 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.542 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.542 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.801 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.801 
14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:43.801 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.801 14:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2539853 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2539853 ']' 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2539853 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2539853 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2539853' 00:26:44.084 killing process with pid 2539853 00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2539853 00:26:44.084 
14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2539853
00:26:44.084 {
00:26:44.084 "results": [
00:26:44.084 {
00:26:44.084 "job": "Nvme0n1",
00:26:44.084 "core_mask": "0x4",
00:26:44.084 "workload": "verify",
00:26:44.084 "status": "terminated",
00:26:44.084 "verify_range": {
00:26:44.084 "start": 0,
00:26:44.084 "length": 16384
00:26:44.084 },
00:26:44.084 "queue_depth": 128,
00:26:44.084 "io_size": 4096,
00:26:44.084 "runtime": 26.943274,
00:26:44.084 "iops": 11932.996710050902,
00:26:44.084 "mibps": 46.613268398636336,
00:26:44.084 "io_failed": 0,
00:26:44.084 "io_timeout": 0,
00:26:44.084 "avg_latency_us": 10706.735491041345,
00:26:44.084 "min_latency_us": 638.2933333333333,
00:26:44.084 "max_latency_us": 3089803.946666667
00:26:44.084 }
00:26:44.084 ],
00:26:44.084 "core_count": 1
00:26:44.084 }
00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2539853
00:26:44.084 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:44.084 [2024-11-06 14:08:01.180523] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:26:44.084 [2024-11-06 14:08:01.180599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539853 ]
00:26:44.084 [2024-11-06 14:08:01.273171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.084 [2024-11-06 14:08:01.323843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:26:44.084 10969.00 IOPS, 42.85 MiB/s [2024-11-06T13:08:30.364Z] 11091.50 IOPS, 43.33 MiB/s [2024-11-06T13:08:30.364Z] 11090.00 IOPS, 43.32 MiB/s [2024-11-06T13:08:30.364Z] 11501.25 IOPS, 44.93 MiB/s [2024-11-06T13:08:30.364Z] 11812.40 IOPS, 46.14 MiB/s [2024-11-06T13:08:30.364Z] 12001.00 IOPS, 46.88 MiB/s [2024-11-06T13:08:30.364Z] 12103.43 IOPS, 47.28 MiB/s [2024-11-06T13:08:30.364Z] 12217.12 IOPS, 47.72 MiB/s [2024-11-06T13:08:30.364Z] 12300.22 IOPS, 48.05 MiB/s [2024-11-06T13:08:30.364Z] 12362.20 IOPS, 48.29 MiB/s [2024-11-06T13:08:30.364Z] 12413.55 IOPS, 48.49 MiB/s [2024-11-06T13:08:30.364Z] 12454.58 IOPS, 48.65 MiB/s [2024-11-06T13:08:30.364Z] [2024-11-06 14:08:15.227082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.084 [2024-11-06 14:08:15.227114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.084 [2024-11-06 14:08:15.227936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.084 [2024-11-06 14:08:15.227941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.227952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.227958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.227968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19208 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.227973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.227984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.227992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 
sqhd:0049 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:26:44.085 [2024-11-06 14:08:15.228377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 
[2024-11-06 14:08:15.228463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 
14:08:15.228553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.085 [2024-11-06 14:08:15.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.085 [2024-11-06 14:08:15.228622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.085 [2024-11-06 
14:08:15.228638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.085 [2024-11-06 14:08:15.228648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.086 [2024-11-06 14:08:15.228653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.086 [2024-11-06 14:08:15.228669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.086 [2024-11-06 14:08:15.228685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.086 [2024-11-06 14:08:15.228701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.086 [2024-11-06 14:08:15.228717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 
14:08:15.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228816] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.228990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.228995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.086 [2024-11-06 14:08:15.229572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.086 [2024-11-06 14:08:15.229577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.229994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.229999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.087 [2024-11-06 14:08:15.230427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.087 [2024-11-06 14:08:15.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.088 [2024-11-06 14:08:15.230479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.230844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.230850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.231105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.231114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.231126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.231131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.231141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.231146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.231156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.231162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.088 [2024-11-06 14:08:15.231173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.088 [2024-11-06 14:08:15.231180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.231401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.231407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.242755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.242777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.242789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.242795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.242806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.242811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.242821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.242827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.242837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.242843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.089 [2024-11-06 14:08:15.243285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.089 [2024-11-06 14:08:15.243414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.089 [2024-11-06 14:08:15.243424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.090 [2024-11-06 14:08:15.243803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.090 [2024-11-06 14:08:15.243808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:44.090 [2024-11-06 14:08:15.243818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.090 [2024-11-06 14:08:15.243825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[~115 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE commands (nsid:1, len:8, lba:19056-20008) and READ commands (nsid:1, len:8, lba:18992-19048) on qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:000a through sqhd:007d, timestamps 14:08:15.243818 through 14:08:15.254801]
00:26:44.093 [2024-11-06 14:08:15.254770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.093 [2024-11-06 14:08:15.254784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:44.093 [2024-11-06 14:08:15.254801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.093 [2024-11-06 14:08:15.254809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.093 [2024-11-06 14:08:15.254824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.093 [2024-11-06 14:08:15.254831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.093 [2024-11-06 14:08:15.254845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.093 [2024-11-06 14:08:15.254853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.093 [2024-11-06 14:08:15.254867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.093 [2024-11-06 14:08:15.254875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.254890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.254898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.254912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.254919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.254933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.254943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.254957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.254965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.254979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.254986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.094 [2024-11-06 14:08:15.255697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:44.094 [2024-11-06 14:08:15.255711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.095 [2024-11-06 14:08:15.255895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.255980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.255995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.095 [2024-11-06 14:08:15.256337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.095 [2024-11-06 14:08:15.256344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.095 [2024-11-06 14:08:15.256536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:44.095 [2024-11-06 14:08:15.256550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.256859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.256867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.257971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.257986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.096 [2024-11-06 14:08:15.258138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:44.096 [2024-11-06 14:08:15.258280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.096 [2024-11-06 14:08:15.258289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.258980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.258988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:44.097 [2024-11-06 14:08:15.259372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.097 [2024-11-06 14:08:15.259380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.098 [2024-11-06 14:08:15.259970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.259983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.259991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.260004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.260012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.260034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.260590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.098 [2024-11-06 14:08:15.260626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:44.098 [2024-11-06 14:08:15.260640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.099 [2024-11-06 14:08:15.260647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:44.099 [2024-11-06 14:08:15.260661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.099 [2024-11-06 14:08:15.260669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:44.099 [2024-11-06 14:08:15.260682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.099 [2024-11-06 14:08:15.260690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:44.099 [2024-11-06 14:08:15.260704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.260991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.261339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.266509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.266554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.266566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.266581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.099 [2024-11-06 14:08:15.266590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.099 [2024-11-06 14:08:15.266605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.266867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.266891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.266915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.266938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.266961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.266977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.266985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.267008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.100 [2024-11-06 14:08:15.267033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.267403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.267411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.268266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.268295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.268320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.268344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.100 [2024-11-06 14:08:15.268369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.100 [2024-11-06 14:08:15.268384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.101 [2024-11-06 14:08:15.268393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.101 [2024-11-06 14:08:15.268408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.101 [2024-11-06 14:08:15.268417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.101
[... repetitive output elided: ~115 further nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs on qid:1 (WRITE and READ commands, lba:18992-20008, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0002-0073, timestamps 14:08:15.268432-14:08:15.271805 ...]
00:26:44.104 [2024-11-06 14:08:15.271823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.271846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.271870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.271893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.271918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.271942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.271951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.272888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.272926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.272937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.272957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.272967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.272986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.272996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.104 [2024-11-06 14:08:15.273474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.104 [2024-11-06 14:08:15.273493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.273973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.273983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.105 [2024-11-06 14:08:15.274515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.274537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.275321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.275336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.275359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.105 [2024-11-06 14:08:15.275370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.105 [2024-11-06 14:08:15.275388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.106 [2024-11-06 14:08:15.275628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.106 [2024-11-06 14:08:15.275639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[... identical command/completion pairs repeated for qid:1 (WRITE lba:19152-20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, and READ lba:19000-19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every I/O failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:003d-002e, timestamps 14:08:15.275659-15.280107 ...]
00:26:44.109 [2024-11-06 14:08:15.280097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.280107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.280127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.280137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.280157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.109 [2024-11-06 14:08:15.280168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.280933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.280949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.280971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.280982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.109 [2024-11-06 14:08:15.281658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.109 [2024-11-06 14:08:15.281677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.281976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.281995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.110 [2024-11-06 14:08:15.282712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.110 [2024-11-06 14:08:15.282901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.110 [2024-11-06 14:08:15.282920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.282931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.111 [2024-11-06 14:08:15.282951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.282961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.111 [2024-11-06 14:08:15.282981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.282993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.111 [2024-11-06 14:08:15.283012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.283022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.111 [2024-11-06 14:08:15.283043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.283055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.111 [2024-11-06 14:08:15.283075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.111 [2024-11-06 14:08:15.283086] nvme_qpair.c: 
00:26:44.111 – 00:26:44.114 [2024-11-06 14:08:15.283106 – 14:08:15.287012] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE commands (sqid:1 nsid:1 lba:19056–20008 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:18992–19040 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on qid:1 each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 — several hundred identical command/completion notice pairs elided.
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.114 [2024-11-06 14:08:15.287020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.287274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.287282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.114 [2024-11-06 14:08:15.288425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.114 [2024-11-06 14:08:15.288432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.288991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.288998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.115 [2024-11-06 14:08:15.289496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.115 [2024-11-06 14:08:15.289514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.115 [2024-11-06 14:08:15.289523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:44.115 [2024-11-06 14:08:15.289541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.115 [2024-11-06 14:08:15.289549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE lba:19072-19552 and READ lba:19000-19048 on sqid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:44.117 11525.38 IOPS, 45.02 MiB/s [2024-11-06T13:08:30.397Z] 10702.14 IOPS, 41.81 MiB/s [2024-11-06T13:08:30.397Z] 9988.67 IOPS, 39.02 MiB/s [2024-11-06T13:08:30.397Z] 10095.81 IOPS, 39.44 MiB/s [2024-11-06T13:08:30.397Z] 10255.47 IOPS, 40.06 MiB/s [2024-11-06T13:08:30.397Z] 10591.39 IOPS, 41.37 MiB/s [2024-11-06T13:08:30.397Z] 10918.89 IOPS, 42.65 MiB/s [2024-11-06T13:08:30.397Z] 11150.40 IOPS, 43.56 MiB/s [2024-11-06T13:08:30.397Z] 11229.86 IOPS, 43.87 MiB/s [2024-11-06T13:08:30.397Z] 11304.64 IOPS, 44.16 MiB/s [2024-11-06T13:08:30.397Z] 11498.96 IOPS, 44.92 MiB/s [2024-11-06T13:08:30.397Z] 11713.58 IOPS, 45.76 MiB/s [2024-11-06T13:08:30.397Z]
[2024-11-06 14:08:27.945306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.117 [2024-11-06 14:08:27.945343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE lba:115840-116456 and READ lba:115720-115776 on sqid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:44.118 [2024-11-06 14:08:27.946571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 
14:08:27.946576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.118 [2024-11-06 14:08:27.946607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.118 [2024-11-06 14:08:27.946625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.118 [2024-11-06 14:08:27.946640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 
14:08:27.946667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 
14:08:27.946755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.946766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.118 [2024-11-06 14:08:27.946771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.118 [2024-11-06 14:08:27.947117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.119 [2024-11-06 14:08:27.947126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.119 [2024-11-06 14:08:27.947138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.119 [2024-11-06 14:08:27.947143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.119 [2024-11-06 14:08:27.947157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.119 [2024-11-06 14:08:27.947163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.119 [2024-11-06 14:08:27.947174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.119 [2024-11-06 14:08:27.947179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.119 [2024-11-06 
14:08:27.947190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.119 [2024-11-06 14:08:27.947194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.119 11866.76 IOPS, 46.35 MiB/s [2024-11-06T13:08:30.399Z] 11904.50 IOPS, 46.50 MiB/s [2024-11-06T13:08:30.399Z] Received shutdown signal, test time was about 26.943883 seconds 00:26:44.119 00:26:44.119 Latency(us) 00:26:44.119 [2024-11-06T13:08:30.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.119 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:44.119 Verification LBA range: start 0x0 length 0x4000 00:26:44.119 Nvme0n1 : 26.94 11933.00 46.61 0.00 0.00 10706.74 638.29 3089803.95 00:26:44.119 [2024-11-06T13:08:30.399Z] =================================================================================================================== 00:26:44.119 [2024-11-06T13:08:30.399Z] Total : 11933.00 46.61 0.00 0.00 10706.74 638.29 3089803.95 00:26:44.119 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # sync 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.380 rmmod nvme_tcp 00:26:44.380 rmmod nvme_fabrics 00:26:44.380 rmmod nvme_keyring 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2539419 ']' 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2539419 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2539419 ']' 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2539419 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2539419 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2539419' 00:26:44.380 killing process with pid 2539419 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2539419 00:26:44.380 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2539419 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.640 14:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.548 14:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:26:46.548 00:26:46.548 real 0m41.683s 00:26:46.548 user 1m47.819s 00:26:46.548 sys 0m11.572s 00:26:46.548 14:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:46.548 14:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:46.548 ************************************ 00:26:46.548 END TEST nvmf_host_multipath_status 00:26:46.548 ************************************ 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.808 ************************************ 00:26:46.808 START TEST nvmf_discovery_remove_ifc 00:26:46.808 ************************************ 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.808 * Looking for test storage... 
00:26:46.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:46.808 14:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.808 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:26:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.808 --rc genhtml_branch_coverage=1 00:26:46.808 --rc genhtml_function_coverage=1 00:26:46.808 --rc genhtml_legend=1 00:26:46.808 --rc geninfo_all_blocks=1 00:26:46.808 --rc geninfo_unexecuted_blocks=1 00:26:46.808 00:26:46.808 ' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:47.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.069 --rc genhtml_branch_coverage=1 00:26:47.069 --rc genhtml_function_coverage=1 00:26:47.069 --rc genhtml_legend=1 00:26:47.069 --rc geninfo_all_blocks=1 00:26:47.069 --rc geninfo_unexecuted_blocks=1 00:26:47.069 00:26:47.069 ' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:47.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.069 --rc genhtml_branch_coverage=1 00:26:47.069 --rc genhtml_function_coverage=1 00:26:47.069 --rc genhtml_legend=1 00:26:47.069 --rc geninfo_all_blocks=1 00:26:47.069 --rc geninfo_unexecuted_blocks=1 00:26:47.069 00:26:47.069 ' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:47.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.069 --rc genhtml_branch_coverage=1 00:26:47.069 --rc genhtml_function_coverage=1 00:26:47.069 --rc genhtml_legend=1 00:26:47.069 --rc geninfo_all_blocks=1 00:26:47.069 --rc geninfo_unexecuted_blocks=1 00:26:47.069 00:26:47.069 ' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF)
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:47.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:47.069 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:26:47.070 14:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=()
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:26:55.206 Found 0000:31:00.0 (0x8086 - 0x159b)
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:26:55.206 Found 0000:31:00.1 (0x8086 - 0x159b)
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:26:55.206 Found net devices under 0000:31:00.0: cvl_0_0
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:26:55.206 Found net devices under 0000:31:00.1: cvl_0_1
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:55.206 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:55.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:55.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms
00:26:55.207 
00:26:55.207 --- 10.0.0.2 ping statistics ---
00:26:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.207 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:55.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:55.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:26:55.207 
00:26:55.207 --- 10.0.0.1 ping statistics ---
00:26:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.207 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2549782
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2549782
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2549782 ']'
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:55.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:55.207 14:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.207 [2024-11-06 14:08:40.769477] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:26:55.207 [2024-11-06 14:08:40.769543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:55.207 [2024-11-06 14:08:40.872346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.207 [2024-11-06 14:08:40.921653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:55.207 [2024-11-06 14:08:40.921710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:55.207 [2024-11-06 14:08:40.921719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:55.207 [2024-11-06 14:08:40.921726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:55.207 [2024-11-06 14:08:40.921733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:55.207 [2024-11-06 14:08:40.922537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.468 [2024-11-06 14:08:41.658964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:55.468 [2024-11-06 14:08:41.667291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:26:55.468 null0
00:26:55.468 [2024-11-06 14:08:41.699201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2550080
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2550080 /tmp/host.sock
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2550080 ']'
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:26:55.468 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:55.468 14:08:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.729 [2024-11-06 14:08:41.775560] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:26:55.729 [2024-11-06 14:08:41.775632] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550080 ]
00:26:55.729 [2024-11-06 14:08:41.871067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.729 [2024-11-06 14:08:41.924053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.670 14:08:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:57.609 [2024-11-06 14:08:43.769948] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:57.609 [2024-11-06 14:08:43.769971] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:57.609 [2024-11-06 14:08:43.769988] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:57.609 [2024-11-06 14:08:43.856268] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:57.869 [2024-11-06 14:08:43.916989] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:26:57.869 [2024-11-06 14:08:43.918142] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1435550:1 started.
00:26:57.869 [2024-11-06 14:08:43.919722] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:26:57.869 [2024-11-06 14:08:43.919776] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:26:57.869 [2024-11-06 14:08:43.919799] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:26:57.869 [2024-11-06 14:08:43.919814] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:57.869 [2024-11-06 14:08:43.919835] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
[2024-11-06 14:08:43.926868] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1435550 was disconnected and freed. delete nvme_qpair.
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:26:57.869 14:08:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:57.869 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:58.130 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:58.130 14:08:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:59.072 14:08:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:00.011 14:08:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:01.394 14:08:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:02.335 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:02.335 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:02.335 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:02.335 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:02.335 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:02.336 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:02.336 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:02.336 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:02.336 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:02.336 14:08:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:03.277 [2024-11-06 14:08:49.370200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:27:03.277 [2024-11-06 14:08:49.370237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.277 [2024-11-06 14:08:49.370247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.277 [2024-11-06 14:08:49.370254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.277 [2024-11-06 14:08:49.370260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.277 [2024-11-06 14:08:49.370266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.277 [2024-11-06 14:08:49.370271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.277 [2024-11-06 14:08:49.370277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.277 [2024-11-06 14:08:49.370282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.277 [2024-11-06 14:08:49.370288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:27:03.277 [2024-11-06 14:08:49.370293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.277 [2024-11-06 14:08:49.370299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1411ec0 is same with the state(6) to be set
00:27:03.277 [2024-11-06 14:08:49.380221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1411ec0 (9): Bad file descriptor
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:03.277 14:08:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
[2024-11-06 14:08:49.390258] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 14:08:49.390267] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 14:08:49.390271] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 14:08:49.390275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 14:08:49.390292] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:04.218 [2024-11-06 14:08:50.397808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:27:04.218 [2024-11-06 14:08:50.397920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1411ec0 with addr=10.0.0.2, port=4420
00:27:04.218 [2024-11-06 14:08:50.397955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1411ec0 is same with the state(6) to be set
00:27:04.218 [2024-11-06 14:08:50.398038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1411ec0 (9): Bad file descriptor
00:27:04.218 [2024-11-06 14:08:50.399208] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:27:04.218 [2024-11-06 14:08:50.399285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:04.218 [2024-11-06 14:08:50.399308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:04.218 [2024-11-06 14:08:50.399332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:04.218 [2024-11-06 14:08:50.399354] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:04.218 [2024-11-06 14:08:50.399371] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:04.218 [2024-11-06 14:08:50.399385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:04.218 [2024-11-06 14:08:50.399408] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:04.218 [2024-11-06 14:08:50.399423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:04.218 14:08:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.218 14:08:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.218 14:08:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:05.161 [2024-11-06 14:08:51.401844] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:05.161 [2024-11-06 14:08:51.401863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:05.161 [2024-11-06 14:08:51.401874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.161 [2024-11-06 14:08:51.401879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.161 [2024-11-06 14:08:51.401885] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:05.161 [2024-11-06 14:08:51.401891] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:05.161 [2024-11-06 14:08:51.401894] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.161 [2024-11-06 14:08:51.401898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.161 [2024-11-06 14:08:51.401918] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:05.161 [2024-11-06 14:08:51.401937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.161 [2024-11-06 14:08:51.401945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.161 [2024-11-06 14:08:51.401954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.161 [2024-11-06 14:08:51.401960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.161 [2024-11-06 14:08:51.401966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:05.161 [2024-11-06 14:08:51.401972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.161 [2024-11-06 14:08:51.401981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.161 [2024-11-06 14:08:51.401987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.161 [2024-11-06 14:08:51.401993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.161 [2024-11-06 14:08:51.401999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.161 [2024-11-06 14:08:51.402004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:05.161 [2024-11-06 14:08:51.402377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401600 (9): Bad file descriptor 00:27:05.161 [2024-11-06 14:08:51.403388] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:05.161 [2024-11-06 14:08:51.403397] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.161 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.422 14:08:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.363 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.623 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.623 14:08:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.194 [2024-11-06 14:08:53.463914] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:07.194 [2024-11-06 14:08:53.463929] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:07.194 [2024-11-06 14:08:53.463939] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:07.453 [2024-11-06 14:08:53.552202] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.454 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.454 [2024-11-06 14:08:53.731244] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:07.714 [2024-11-06 14:08:53.732190] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x141c540:1 started. 
00:27:07.714 [2024-11-06 14:08:53.733108] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.714 [2024-11-06 14:08:53.733141] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.714 [2024-11-06 14:08:53.733158] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.714 [2024-11-06 14:08:53.733170] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:07.714 [2024-11-06 14:08:53.733176] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.714 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.714 14:08:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.714 [2024-11-06 14:08:53.740843] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x141c540 was disconnected and freed. delete nvme_qpair. 
00:27:08.654 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2550080 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2550080 ']' 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2550080 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2550080 
00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2550080' 00:27:08.655 killing process with pid 2550080 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2550080 00:27:08.655 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2550080 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.915 14:08:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.915 rmmod nvme_tcp 00:27:08.915 rmmod nvme_fabrics 00:27:08.915 rmmod nvme_keyring 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2549782 ']' 00:27:08.915 
14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2549782 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2549782 ']' 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2549782 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2549782 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2549782' 00:27:08.915 killing process with pid 2549782 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2549782 00:27:08.915 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2549782 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:08.916 14:08:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.916 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.177 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.177 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.177 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.177 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.177 14:08:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.092 00:27:11.092 real 0m24.383s 00:27:11.092 user 0m29.350s 00:27:11.092 sys 0m7.160s 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.092 ************************************ 00:27:11.092 END TEST nvmf_discovery_remove_ifc 00:27:11.092 ************************************ 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.092 ************************************ 
00:27:11.092 START TEST nvmf_identify_kernel_target 00:27:11.092 ************************************ 00:27:11.092 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:11.353 * Looking for test storage... 00:27:11.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.353 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.353 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.354 14:08:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.354 --rc genhtml_branch_coverage=1 00:27:11.354 --rc genhtml_function_coverage=1 00:27:11.354 --rc genhtml_legend=1 00:27:11.354 --rc geninfo_all_blocks=1 00:27:11.354 --rc geninfo_unexecuted_blocks=1 00:27:11.354 00:27:11.354 ' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.354 --rc genhtml_branch_coverage=1 00:27:11.354 --rc genhtml_function_coverage=1 00:27:11.354 --rc genhtml_legend=1 00:27:11.354 --rc geninfo_all_blocks=1 00:27:11.354 --rc geninfo_unexecuted_blocks=1 00:27:11.354 00:27:11.354 ' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.354 --rc genhtml_branch_coverage=1 00:27:11.354 --rc genhtml_function_coverage=1 00:27:11.354 --rc genhtml_legend=1 00:27:11.354 --rc geninfo_all_blocks=1 00:27:11.354 --rc geninfo_unexecuted_blocks=1 00:27:11.354 00:27:11.354 ' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.354 --rc genhtml_branch_coverage=1 00:27:11.354 --rc genhtml_function_coverage=1 00:27:11.354 --rc genhtml_legend=1 00:27:11.354 --rc geninfo_all_blocks=1 
00:27:11.354 --rc geninfo_unexecuted_blocks=1 00:27:11.354 00:27:11.354 ' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:11.354 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.355 14:08:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.627 14:09:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:19.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.627 14:09:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:19.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.627 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.628 14:09:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:19.628 Found net devices under 0000:31:00.0: cvl_0_0 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:19.628 Found net devices under 0000:31:00.1: cvl_0_1 
00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.628 14:09:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:19.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:19.628 00:27:19.628 --- 10.0.0.2 ping statistics --- 00:27:19.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.628 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:19.628 00:27:19.628 --- 10.0.0.1 ping statistics --- 00:27:19.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.628 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:19.628 
14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:19.628 14:09:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:22.932 Waiting for block devices as requested 00:27:22.932 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:22.932 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:22.932 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:22.932 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:22.932 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:23.194 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:23.194 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:23.194 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:23.459 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:23.459 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:23.721 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:23.721 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:23.721 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:23.982 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:23.982 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:27:23.982 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.243 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:24.504 No valid GPT data, bailing 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:24.504 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:24.505 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:24.820 00:27:24.820 Discovery Log Number of Records 2, Generation counter 2 00:27:24.820 =====Discovery Log Entry 0====== 00:27:24.820 trtype: tcp 00:27:24.820 adrfam: ipv4 00:27:24.820 subtype: current discovery subsystem 
00:27:24.820 treq: not specified, sq flow control disable supported 00:27:24.820 portid: 1 00:27:24.820 trsvcid: 4420 00:27:24.820 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:24.820 traddr: 10.0.0.1 00:27:24.820 eflags: none 00:27:24.820 sectype: none 00:27:24.820 =====Discovery Log Entry 1====== 00:27:24.820 trtype: tcp 00:27:24.820 adrfam: ipv4 00:27:24.820 subtype: nvme subsystem 00:27:24.820 treq: not specified, sq flow control disable supported 00:27:24.820 portid: 1 00:27:24.820 trsvcid: 4420 00:27:24.820 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:24.820 traddr: 10.0.0.1 00:27:24.820 eflags: none 00:27:24.820 sectype: none 00:27:24.820 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:24.820 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:24.820 ===================================================== 00:27:24.820 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:24.820 ===================================================== 00:27:24.820 Controller Capabilities/Features 00:27:24.820 ================================ 00:27:24.820 Vendor ID: 0000 00:27:24.820 Subsystem Vendor ID: 0000 00:27:24.820 Serial Number: 3ae82eec9124adbcadca 00:27:24.820 Model Number: Linux 00:27:24.820 Firmware Version: 6.8.9-20 00:27:24.820 Recommended Arb Burst: 0 00:27:24.820 IEEE OUI Identifier: 00 00 00 00:27:24.820 Multi-path I/O 00:27:24.820 May have multiple subsystem ports: No 00:27:24.820 May have multiple controllers: No 00:27:24.820 Associated with SR-IOV VF: No 00:27:24.820 Max Data Transfer Size: Unlimited 00:27:24.820 Max Number of Namespaces: 0 00:27:24.820 Max Number of I/O Queues: 1024 00:27:24.820 NVMe Specification Version (VS): 1.3 00:27:24.820 NVMe Specification Version (Identify): 1.3 00:27:24.820 Maximum Queue Entries: 1024 
00:27:24.820 Contiguous Queues Required: No 00:27:24.820 Arbitration Mechanisms Supported 00:27:24.820 Weighted Round Robin: Not Supported 00:27:24.820 Vendor Specific: Not Supported 00:27:24.820 Reset Timeout: 7500 ms 00:27:24.820 Doorbell Stride: 4 bytes 00:27:24.820 NVM Subsystem Reset: Not Supported 00:27:24.820 Command Sets Supported 00:27:24.820 NVM Command Set: Supported 00:27:24.820 Boot Partition: Not Supported 00:27:24.820 Memory Page Size Minimum: 4096 bytes 00:27:24.820 Memory Page Size Maximum: 4096 bytes 00:27:24.820 Persistent Memory Region: Not Supported 00:27:24.820 Optional Asynchronous Events Supported 00:27:24.820 Namespace Attribute Notices: Not Supported 00:27:24.820 Firmware Activation Notices: Not Supported 00:27:24.820 ANA Change Notices: Not Supported 00:27:24.820 PLE Aggregate Log Change Notices: Not Supported 00:27:24.820 LBA Status Info Alert Notices: Not Supported 00:27:24.820 EGE Aggregate Log Change Notices: Not Supported 00:27:24.820 Normal NVM Subsystem Shutdown event: Not Supported 00:27:24.820 Zone Descriptor Change Notices: Not Supported 00:27:24.820 Discovery Log Change Notices: Supported 00:27:24.820 Controller Attributes 00:27:24.820 128-bit Host Identifier: Not Supported 00:27:24.820 Non-Operational Permissive Mode: Not Supported 00:27:24.820 NVM Sets: Not Supported 00:27:24.820 Read Recovery Levels: Not Supported 00:27:24.820 Endurance Groups: Not Supported 00:27:24.820 Predictable Latency Mode: Not Supported 00:27:24.820 Traffic Based Keep ALive: Not Supported 00:27:24.820 Namespace Granularity: Not Supported 00:27:24.820 SQ Associations: Not Supported 00:27:24.820 UUID List: Not Supported 00:27:24.820 Multi-Domain Subsystem: Not Supported 00:27:24.820 Fixed Capacity Management: Not Supported 00:27:24.820 Variable Capacity Management: Not Supported 00:27:24.820 Delete Endurance Group: Not Supported 00:27:24.820 Delete NVM Set: Not Supported 00:27:24.820 Extended LBA Formats Supported: Not Supported 00:27:24.821 Flexible 
Data Placement Supported: Not Supported 00:27:24.821 00:27:24.821 Controller Memory Buffer Support 00:27:24.821 ================================ 00:27:24.821 Supported: No 00:27:24.821 00:27:24.821 Persistent Memory Region Support 00:27:24.821 ================================ 00:27:24.821 Supported: No 00:27:24.821 00:27:24.821 Admin Command Set Attributes 00:27:24.821 ============================ 00:27:24.821 Security Send/Receive: Not Supported 00:27:24.821 Format NVM: Not Supported 00:27:24.821 Firmware Activate/Download: Not Supported 00:27:24.821 Namespace Management: Not Supported 00:27:24.821 Device Self-Test: Not Supported 00:27:24.821 Directives: Not Supported 00:27:24.821 NVMe-MI: Not Supported 00:27:24.821 Virtualization Management: Not Supported 00:27:24.821 Doorbell Buffer Config: Not Supported 00:27:24.821 Get LBA Status Capability: Not Supported 00:27:24.821 Command & Feature Lockdown Capability: Not Supported 00:27:24.821 Abort Command Limit: 1 00:27:24.821 Async Event Request Limit: 1 00:27:24.821 Number of Firmware Slots: N/A 00:27:24.821 Firmware Slot 1 Read-Only: N/A 00:27:24.821 Firmware Activation Without Reset: N/A 00:27:24.821 Multiple Update Detection Support: N/A 00:27:24.821 Firmware Update Granularity: No Information Provided 00:27:24.821 Per-Namespace SMART Log: No 00:27:24.821 Asymmetric Namespace Access Log Page: Not Supported 00:27:24.821 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:24.821 Command Effects Log Page: Not Supported 00:27:24.821 Get Log Page Extended Data: Supported 00:27:24.821 Telemetry Log Pages: Not Supported 00:27:24.821 Persistent Event Log Pages: Not Supported 00:27:24.821 Supported Log Pages Log Page: May Support 00:27:24.821 Commands Supported & Effects Log Page: Not Supported 00:27:24.821 Feature Identifiers & Effects Log Page:May Support 00:27:24.821 NVMe-MI Commands & Effects Log Page: May Support 00:27:24.821 Data Area 4 for Telemetry Log: Not Supported 00:27:24.821 Error Log Page Entries 
Supported: 1 00:27:24.821 Keep Alive: Not Supported 00:27:24.821 00:27:24.821 NVM Command Set Attributes 00:27:24.821 ========================== 00:27:24.821 Submission Queue Entry Size 00:27:24.821 Max: 1 00:27:24.821 Min: 1 00:27:24.821 Completion Queue Entry Size 00:27:24.821 Max: 1 00:27:24.821 Min: 1 00:27:24.821 Number of Namespaces: 0 00:27:24.821 Compare Command: Not Supported 00:27:24.821 Write Uncorrectable Command: Not Supported 00:27:24.821 Dataset Management Command: Not Supported 00:27:24.821 Write Zeroes Command: Not Supported 00:27:24.821 Set Features Save Field: Not Supported 00:27:24.821 Reservations: Not Supported 00:27:24.821 Timestamp: Not Supported 00:27:24.821 Copy: Not Supported 00:27:24.821 Volatile Write Cache: Not Present 00:27:24.821 Atomic Write Unit (Normal): 1 00:27:24.821 Atomic Write Unit (PFail): 1 00:27:24.821 Atomic Compare & Write Unit: 1 00:27:24.821 Fused Compare & Write: Not Supported 00:27:24.821 Scatter-Gather List 00:27:24.821 SGL Command Set: Supported 00:27:24.821 SGL Keyed: Not Supported 00:27:24.821 SGL Bit Bucket Descriptor: Not Supported 00:27:24.821 SGL Metadata Pointer: Not Supported 00:27:24.821 Oversized SGL: Not Supported 00:27:24.821 SGL Metadata Address: Not Supported 00:27:24.821 SGL Offset: Supported 00:27:24.821 Transport SGL Data Block: Not Supported 00:27:24.821 Replay Protected Memory Block: Not Supported 00:27:24.821 00:27:24.821 Firmware Slot Information 00:27:24.821 ========================= 00:27:24.821 Active slot: 0 00:27:24.821 00:27:24.821 00:27:24.821 Error Log 00:27:24.821 ========= 00:27:24.821 00:27:24.821 Active Namespaces 00:27:24.821 ================= 00:27:24.821 Discovery Log Page 00:27:24.821 ================== 00:27:24.821 Generation Counter: 2 00:27:24.821 Number of Records: 2 00:27:24.821 Record Format: 0 00:27:24.821 00:27:24.821 Discovery Log Entry 0 00:27:24.821 ---------------------- 00:27:24.821 Transport Type: 3 (TCP) 00:27:24.821 Address Family: 1 (IPv4) 00:27:24.821 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:24.821 Entry Flags: 00:27:24.821 Duplicate Returned Information: 0 00:27:24.821 Explicit Persistent Connection Support for Discovery: 0 00:27:24.821 Transport Requirements: 00:27:24.821 Secure Channel: Not Specified 00:27:24.821 Port ID: 1 (0x0001) 00:27:24.821 Controller ID: 65535 (0xffff) 00:27:24.821 Admin Max SQ Size: 32 00:27:24.821 Transport Service Identifier: 4420 00:27:24.821 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:24.821 Transport Address: 10.0.0.1 00:27:24.821 Discovery Log Entry 1 00:27:24.821 ---------------------- 00:27:24.821 Transport Type: 3 (TCP) 00:27:24.821 Address Family: 1 (IPv4) 00:27:24.821 Subsystem Type: 2 (NVM Subsystem) 00:27:24.821 Entry Flags: 00:27:24.821 Duplicate Returned Information: 0 00:27:24.821 Explicit Persistent Connection Support for Discovery: 0 00:27:24.821 Transport Requirements: 00:27:24.821 Secure Channel: Not Specified 00:27:24.821 Port ID: 1 (0x0001) 00:27:24.821 Controller ID: 65535 (0xffff) 00:27:24.821 Admin Max SQ Size: 32 00:27:24.821 Transport Service Identifier: 4420 00:27:24.821 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:24.821 Transport Address: 10.0.0.1 00:27:24.821 14:09:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:24.821 get_feature(0x01) failed 00:27:24.821 get_feature(0x02) failed 00:27:24.821 get_feature(0x04) failed 00:27:24.821 ===================================================== 00:27:24.821 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:24.821 ===================================================== 00:27:24.821 Controller Capabilities/Features 00:27:24.821 ================================ 00:27:24.821 Vendor ID: 0000 00:27:24.821 Subsystem Vendor ID: 
0000 00:27:24.821 Serial Number: fc740300eb46312e3a46 00:27:24.821 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:24.821 Firmware Version: 6.8.9-20 00:27:24.821 Recommended Arb Burst: 6 00:27:24.821 IEEE OUI Identifier: 00 00 00 00:27:24.821 Multi-path I/O 00:27:24.821 May have multiple subsystem ports: Yes 00:27:24.821 May have multiple controllers: Yes 00:27:24.821 Associated with SR-IOV VF: No 00:27:24.821 Max Data Transfer Size: Unlimited 00:27:24.821 Max Number of Namespaces: 1024 00:27:24.821 Max Number of I/O Queues: 128 00:27:24.821 NVMe Specification Version (VS): 1.3 00:27:24.821 NVMe Specification Version (Identify): 1.3 00:27:24.821 Maximum Queue Entries: 1024 00:27:24.821 Contiguous Queues Required: No 00:27:24.821 Arbitration Mechanisms Supported 00:27:24.821 Weighted Round Robin: Not Supported 00:27:24.821 Vendor Specific: Not Supported 00:27:24.821 Reset Timeout: 7500 ms 00:27:24.821 Doorbell Stride: 4 bytes 00:27:24.821 NVM Subsystem Reset: Not Supported 00:27:24.821 Command Sets Supported 00:27:24.821 NVM Command Set: Supported 00:27:24.821 Boot Partition: Not Supported 00:27:24.821 Memory Page Size Minimum: 4096 bytes 00:27:24.821 Memory Page Size Maximum: 4096 bytes 00:27:24.821 Persistent Memory Region: Not Supported 00:27:24.821 Optional Asynchronous Events Supported 00:27:24.821 Namespace Attribute Notices: Supported 00:27:24.821 Firmware Activation Notices: Not Supported 00:27:24.821 ANA Change Notices: Supported 00:27:24.821 PLE Aggregate Log Change Notices: Not Supported 00:27:24.821 LBA Status Info Alert Notices: Not Supported 00:27:24.821 EGE Aggregate Log Change Notices: Not Supported 00:27:24.821 Normal NVM Subsystem Shutdown event: Not Supported 00:27:24.821 Zone Descriptor Change Notices: Not Supported 00:27:24.821 Discovery Log Change Notices: Not Supported 00:27:24.821 Controller Attributes 00:27:24.821 128-bit Host Identifier: Supported 00:27:24.821 Non-Operational Permissive Mode: Not Supported 00:27:24.821 NVM Sets: Not 
Supported 00:27:24.821 Read Recovery Levels: Not Supported 00:27:24.821 Endurance Groups: Not Supported 00:27:24.821 Predictable Latency Mode: Not Supported 00:27:24.821 Traffic Based Keep ALive: Supported 00:27:24.821 Namespace Granularity: Not Supported 00:27:24.821 SQ Associations: Not Supported 00:27:24.821 UUID List: Not Supported 00:27:24.821 Multi-Domain Subsystem: Not Supported 00:27:24.821 Fixed Capacity Management: Not Supported 00:27:24.821 Variable Capacity Management: Not Supported 00:27:24.821 Delete Endurance Group: Not Supported 00:27:24.821 Delete NVM Set: Not Supported 00:27:24.821 Extended LBA Formats Supported: Not Supported 00:27:24.821 Flexible Data Placement Supported: Not Supported 00:27:24.821 00:27:24.821 Controller Memory Buffer Support 00:27:24.821 ================================ 00:27:24.821 Supported: No 00:27:24.821 00:27:24.821 Persistent Memory Region Support 00:27:24.821 ================================ 00:27:24.822 Supported: No 00:27:24.822 00:27:24.822 Admin Command Set Attributes 00:27:24.822 ============================ 00:27:24.822 Security Send/Receive: Not Supported 00:27:24.822 Format NVM: Not Supported 00:27:24.822 Firmware Activate/Download: Not Supported 00:27:24.822 Namespace Management: Not Supported 00:27:24.822 Device Self-Test: Not Supported 00:27:24.822 Directives: Not Supported 00:27:24.822 NVMe-MI: Not Supported 00:27:24.822 Virtualization Management: Not Supported 00:27:24.822 Doorbell Buffer Config: Not Supported 00:27:24.822 Get LBA Status Capability: Not Supported 00:27:24.822 Command & Feature Lockdown Capability: Not Supported 00:27:24.822 Abort Command Limit: 4 00:27:24.822 Async Event Request Limit: 4 00:27:24.822 Number of Firmware Slots: N/A 00:27:24.822 Firmware Slot 1 Read-Only: N/A 00:27:24.822 Firmware Activation Without Reset: N/A 00:27:24.822 Multiple Update Detection Support: N/A 00:27:24.822 Firmware Update Granularity: No Information Provided 00:27:24.822 Per-Namespace SMART Log: Yes 
00:27:24.822 Asymmetric Namespace Access Log Page: Supported 00:27:24.822 ANA Transition Time : 10 sec 00:27:24.822 00:27:24.822 Asymmetric Namespace Access Capabilities 00:27:24.822 ANA Optimized State : Supported 00:27:24.822 ANA Non-Optimized State : Supported 00:27:24.822 ANA Inaccessible State : Supported 00:27:24.822 ANA Persistent Loss State : Supported 00:27:24.822 ANA Change State : Supported 00:27:24.822 ANAGRPID is not changed : No 00:27:24.822 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:24.822 00:27:24.822 ANA Group Identifier Maximum : 128 00:27:24.822 Number of ANA Group Identifiers : 128 00:27:24.822 Max Number of Allowed Namespaces : 1024 00:27:24.822 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:24.822 Command Effects Log Page: Supported 00:27:24.822 Get Log Page Extended Data: Supported 00:27:24.822 Telemetry Log Pages: Not Supported 00:27:24.822 Persistent Event Log Pages: Not Supported 00:27:24.822 Supported Log Pages Log Page: May Support 00:27:24.822 Commands Supported & Effects Log Page: Not Supported 00:27:24.822 Feature Identifiers & Effects Log Page:May Support 00:27:24.822 NVMe-MI Commands & Effects Log Page: May Support 00:27:24.822 Data Area 4 for Telemetry Log: Not Supported 00:27:24.822 Error Log Page Entries Supported: 128 00:27:24.822 Keep Alive: Supported 00:27:24.822 Keep Alive Granularity: 1000 ms 00:27:24.822 00:27:24.822 NVM Command Set Attributes 00:27:24.822 ========================== 00:27:24.822 Submission Queue Entry Size 00:27:24.822 Max: 64 00:27:24.822 Min: 64 00:27:24.822 Completion Queue Entry Size 00:27:24.822 Max: 16 00:27:24.822 Min: 16 00:27:24.822 Number of Namespaces: 1024 00:27:24.822 Compare Command: Not Supported 00:27:24.822 Write Uncorrectable Command: Not Supported 00:27:24.822 Dataset Management Command: Supported 00:27:24.822 Write Zeroes Command: Supported 00:27:24.822 Set Features Save Field: Not Supported 00:27:24.822 Reservations: Not Supported 00:27:24.822 Timestamp: Not Supported 
00:27:24.822 Copy: Not Supported 00:27:24.822 Volatile Write Cache: Present 00:27:24.822 Atomic Write Unit (Normal): 1 00:27:24.822 Atomic Write Unit (PFail): 1 00:27:24.822 Atomic Compare & Write Unit: 1 00:27:24.822 Fused Compare & Write: Not Supported 00:27:24.822 Scatter-Gather List 00:27:24.822 SGL Command Set: Supported 00:27:24.822 SGL Keyed: Not Supported 00:27:24.822 SGL Bit Bucket Descriptor: Not Supported 00:27:24.822 SGL Metadata Pointer: Not Supported 00:27:24.822 Oversized SGL: Not Supported 00:27:24.822 SGL Metadata Address: Not Supported 00:27:24.822 SGL Offset: Supported 00:27:24.822 Transport SGL Data Block: Not Supported 00:27:24.822 Replay Protected Memory Block: Not Supported 00:27:24.822 00:27:24.822 Firmware Slot Information 00:27:24.822 ========================= 00:27:24.822 Active slot: 0 00:27:24.822 00:27:24.822 Asymmetric Namespace Access 00:27:24.822 =========================== 00:27:24.822 Change Count : 0 00:27:24.822 Number of ANA Group Descriptors : 1 00:27:24.822 ANA Group Descriptor : 0 00:27:24.822 ANA Group ID : 1 00:27:24.822 Number of NSID Values : 1 00:27:24.822 Change Count : 0 00:27:24.822 ANA State : 1 00:27:24.822 Namespace Identifier : 1 00:27:24.822 00:27:24.822 Commands Supported and Effects 00:27:24.822 ============================== 00:27:24.822 Admin Commands 00:27:24.822 -------------- 00:27:24.822 Get Log Page (02h): Supported 00:27:24.822 Identify (06h): Supported 00:27:24.822 Abort (08h): Supported 00:27:24.822 Set Features (09h): Supported 00:27:24.822 Get Features (0Ah): Supported 00:27:24.822 Asynchronous Event Request (0Ch): Supported 00:27:24.822 Keep Alive (18h): Supported 00:27:24.822 I/O Commands 00:27:24.822 ------------ 00:27:24.822 Flush (00h): Supported 00:27:24.822 Write (01h): Supported LBA-Change 00:27:24.822 Read (02h): Supported 00:27:24.822 Write Zeroes (08h): Supported LBA-Change 00:27:24.822 Dataset Management (09h): Supported 00:27:24.822 00:27:24.822 Error Log 00:27:24.822 ========= 
00:27:24.822 Entry: 0 00:27:24.822 Error Count: 0x3 00:27:24.822 Submission Queue Id: 0x0 00:27:24.822 Command Id: 0x5 00:27:24.822 Phase Bit: 0 00:27:24.822 Status Code: 0x2 00:27:24.822 Status Code Type: 0x0 00:27:24.822 Do Not Retry: 1 00:27:24.822 Error Location: 0x28 00:27:24.822 LBA: 0x0 00:27:24.822 Namespace: 0x0 00:27:24.822 Vendor Log Page: 0x0 00:27:24.822 ----------- 00:27:24.822 Entry: 1 00:27:24.822 Error Count: 0x2 00:27:24.822 Submission Queue Id: 0x0 00:27:24.822 Command Id: 0x5 00:27:24.822 Phase Bit: 0 00:27:24.822 Status Code: 0x2 00:27:24.822 Status Code Type: 0x0 00:27:24.822 Do Not Retry: 1 00:27:24.822 Error Location: 0x28 00:27:24.822 LBA: 0x0 00:27:24.822 Namespace: 0x0 00:27:24.822 Vendor Log Page: 0x0 00:27:24.822 ----------- 00:27:24.822 Entry: 2 00:27:24.822 Error Count: 0x1 00:27:24.822 Submission Queue Id: 0x0 00:27:24.822 Command Id: 0x4 00:27:24.822 Phase Bit: 0 00:27:24.822 Status Code: 0x2 00:27:24.822 Status Code Type: 0x0 00:27:24.822 Do Not Retry: 1 00:27:24.822 Error Location: 0x28 00:27:24.822 LBA: 0x0 00:27:24.822 Namespace: 0x0 00:27:24.822 Vendor Log Page: 0x0 00:27:24.822 00:27:24.822 Number of Queues 00:27:24.822 ================ 00:27:24.822 Number of I/O Submission Queues: 128 00:27:24.822 Number of I/O Completion Queues: 128 00:27:24.822 00:27:24.822 ZNS Specific Controller Data 00:27:24.822 ============================ 00:27:24.822 Zone Append Size Limit: 0 00:27:24.822 00:27:24.822 00:27:24.822 Active Namespaces 00:27:24.822 ================= 00:27:24.822 get_feature(0x05) failed 00:27:24.822 Namespace ID:1 00:27:24.822 Command Set Identifier: NVM (00h) 00:27:24.822 Deallocate: Supported 00:27:24.822 Deallocated/Unwritten Error: Not Supported 00:27:24.822 Deallocated Read Value: Unknown 00:27:24.822 Deallocate in Write Zeroes: Not Supported 00:27:24.822 Deallocated Guard Field: 0xFFFF 00:27:24.822 Flush: Supported 00:27:24.822 Reservation: Not Supported 00:27:24.822 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:24.822 Size (in LBAs): 3750748848 (1788GiB) 00:27:24.822 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:24.822 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:24.822 UUID: 6468376e-0fe3-4c8f-a97b-dd3d5494fbff 00:27:24.822 Thin Provisioning: Not Supported 00:27:24.822 Per-NS Atomic Units: Yes 00:27:24.822 Atomic Write Unit (Normal): 8 00:27:24.822 Atomic Write Unit (PFail): 8 00:27:24.822 Preferred Write Granularity: 8 00:27:24.822 Atomic Compare & Write Unit: 8 00:27:24.822 Atomic Boundary Size (Normal): 0 00:27:24.822 Atomic Boundary Size (PFail): 0 00:27:24.822 Atomic Boundary Offset: 0 00:27:24.822 NGUID/EUI64 Never Reused: No 00:27:24.822 ANA group ID: 1 00:27:24.822 Namespace Write Protected: No 00:27:24.822 Number of LBA Formats: 1 00:27:24.822 Current LBA Format: LBA Format #00 00:27:24.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:24.822 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.822 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.822 rmmod nvme_tcp 00:27:24.822 rmmod nvme_fabrics 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:24.823 14:09:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.823 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.084 14:09:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:26.997 14:09:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.206 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:31.206 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.206 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:31.206 00:27:31.206 real 0m20.015s 00:27:31.206 user 0m5.340s 00:27:31.206 sys 0m11.604s 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.206 ************************************ 00:27:31.206 END TEST nvmf_identify_kernel_target 00:27:31.206 ************************************ 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.206 ************************************ 00:27:31.206 START TEST nvmf_auth_host 00:27:31.206 ************************************ 00:27:31.206 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:31.468 * Looking for test storage... 
00:27:31.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.468 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:31.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.469 --rc genhtml_branch_coverage=1 00:27:31.469 --rc genhtml_function_coverage=1 00:27:31.469 --rc genhtml_legend=1 00:27:31.469 --rc geninfo_all_blocks=1 00:27:31.469 --rc geninfo_unexecuted_blocks=1 00:27:31.469 00:27:31.469 ' 00:27:31.469 14:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:31.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.469 --rc genhtml_branch_coverage=1 00:27:31.469 --rc genhtml_function_coverage=1 00:27:31.469 --rc genhtml_legend=1 00:27:31.469 --rc geninfo_all_blocks=1 00:27:31.469 --rc geninfo_unexecuted_blocks=1 00:27:31.469 00:27:31.469 ' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:31.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.469 --rc genhtml_branch_coverage=1 00:27:31.469 --rc genhtml_function_coverage=1 00:27:31.469 --rc genhtml_legend=1 00:27:31.469 --rc geninfo_all_blocks=1 00:27:31.469 --rc geninfo_unexecuted_blocks=1 00:27:31.469 00:27:31.469 ' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:31.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.469 --rc genhtml_branch_coverage=1 00:27:31.469 --rc genhtml_function_coverage=1 00:27:31.469 --rc genhtml_legend=1 00:27:31.469 --rc geninfo_all_blocks=1 00:27:31.469 --rc geninfo_unexecuted_blocks=1 00:27:31.469 00:27:31.469 ' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.469 14:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.469 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.470 14:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.470 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.470 14:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.611 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:39.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:39.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:39.612 Found net devices under 0000:31:00.0: cvl_0_0 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:39.612 Found net devices under 0000:31:00.1: cvl_0_1 00:27:39.612 14:09:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.612 14:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.612 14:09:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:27:39.612 00:27:39.612 --- 10.0.0.2 ping statistics --- 00:27:39.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.612 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:39.612 00:27:39.612 --- 10.0.0.1 ping statistics --- 00:27:39.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.612 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2565286 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2565286 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2565286 ']' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:39.612 14:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.184 14:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d8a8e5ea90b6b34f75503b042e259cb4 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.h4H 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d8a8e5ea90b6b34f75503b042e259cb4 0 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d8a8e5ea90b6b34f75503b042e259cb4 0 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d8a8e5ea90b6b34f75503b042e259cb4 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.h4H 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.h4H 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.h4H 
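The `gen_dhchap_key` trace above reads random bytes via `xxd` and pipes them through an inline `python -` heredoc to produce a DH-HMAC-CHAP secret. A minimal sketch of that formatting step, assuming the standard DHHC-1 layout (base64 of the raw key followed by its little-endian CRC32, as used by `nvme gen-dhchap-key`) — the function name is illustrative, not the script's own:

```python
import base64
import zlib


def format_dhchap_key(key_hex: str, hash_id: int = 0) -> str:
    """Wrap raw key bytes in the DHHC-1 secret representation:
    DHHC-1:<hash-id>:<base64(key || crc32_le(key))>:
    hash_id 0 means no hash transformation (the "null" digest above)."""
    key = bytes.fromhex(key_hex)
    # CRC32 of the key, appended little-endian, lets consumers detect typos
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(hash_id, b64)


# the same 16-byte random key the log's `gen_dhchap_key null 32` call drew
secret = format_dhchap_key("d8a8e5ea90b6b34f75503b042e259cb4")
```

The resulting string is what the trace writes to `/tmp/spdk.key-null.*` and later hands to the target as `keys[0]`.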
00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=04517ee57018f8f7a1df54a795f8e512f9836891ab2d5f991602dd46c71db66d 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EJ1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 04517ee57018f8f7a1df54a795f8e512f9836891ab2d5f991602dd46c71db66d 3 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 04517ee57018f8f7a1df54a795f8e512f9836891ab2d5f991602dd46c71db66d 3 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=04517ee57018f8f7a1df54a795f8e512f9836891ab2d5f991602dd46c71db66d 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EJ1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EJ1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.EJ1 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.184 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=39af4c28d34b20eb89f2000c8cb9db436d963e49aa593c07 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AtD 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 39af4c28d34b20eb89f2000c8cb9db436d963e49aa593c07 0 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 39af4c28d34b20eb89f2000c8cb9db436d963e49aa593c07 0 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=39af4c28d34b20eb89f2000c8cb9db436d963e49aa593c07 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.185 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AtD 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AtD 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AtD 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a298109d0cdd589916162e1e6e3a00ba06277852a5e4804 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Y8 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a298109d0cdd589916162e1e6e3a00ba06277852a5e4804 2 00:27:40.447 14:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a298109d0cdd589916162e1e6e3a00ba06277852a5e4804 2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a298109d0cdd589916162e1e6e3a00ba06277852a5e4804 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Y8 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Y8 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0Y8 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d7a7f3c61e3bfa59ed22c8f38de5f4b 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eij 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d7a7f3c61e3bfa59ed22c8f38de5f4b 1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d7a7f3c61e3bfa59ed22c8f38de5f4b 1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d7a7f3c61e3bfa59ed22c8f38de5f4b 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eij 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eij 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eij 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8d06549e32d8e1c28ec12a6c16af1a2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zu3 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8d06549e32d8e1c28ec12a6c16af1a2 1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8d06549e32d8e1c28ec12a6c16af1a2 1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8d06549e32d8e1c28ec12a6c16af1a2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zu3 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zu3 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zu3 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.447 14:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f22792a6caf3c667710660569bcf37ad6192fad342101f99 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Ga 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f22792a6caf3c667710660569bcf37ad6192fad342101f99 2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f22792a6caf3c667710660569bcf37ad6192fad342101f99 2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f22792a6caf3c667710660569bcf37ad6192fad342101f99 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:40.447 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Ga 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Ga 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0Ga 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3b4d42d773cc0e33deedc82846c9dffd 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FIa 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3b4d42d773cc0e33deedc82846c9dffd 0 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3b4d42d773cc0e33deedc82846c9dffd 0 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3b4d42d773cc0e33deedc82846c9dffd 00:27:40.708 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FIa 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FIa 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.FIa 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29fd1ee758ae9d4e195f4d049c804b560673871bd397b9d9e996dc2fa5ef1bdd 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.z0r 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29fd1ee758ae9d4e195f4d049c804b560673871bd397b9d9e996dc2fa5ef1bdd 3 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 29fd1ee758ae9d4e195f4d049c804b560673871bd397b9d9e996dc2fa5ef1bdd 3 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=29fd1ee758ae9d4e195f4d049c804b560673871bd397b9d9e996dc2fa5ef1bdd 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:40.709 14:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.z0r 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.z0r 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.z0r 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2565286 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2565286 ']' 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
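The gen_dhchap_key / format_dhchap_key calls traced above (nvmf/common.sh@751-760) reduce to: draw len/2 random bytes with xxd, wrap them in the DHHC-1 secret format via an inline python step, and chmod 0600 the temp file. A standalone sketch of that flow, assuming the standard DHHC-1 framing (base64 of the raw key plus a little-endian CRC32, with indicator 01 for a sha256-sized key) rather than quoting the script itself:

```shell
# Sketch of the gen_dhchap_key flow seen in the trace (not the exact script).
key=$(xxd -p -c0 -l 16 /dev/urandom)          # 32 hex chars = 16 raw bytes
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
# DHHC-1 secrets append a little-endian CRC32 of the key before base64-encoding
crc = binascii.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:01:{base64.b64encode(raw + crc).decode()}:")  # 01 = sha256
PY
chmod 0600 "$file"
cat "$file"
```

The resulting file is what the trace later hands to `keyring_file_add_key` as, e.g., keys[2].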
00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.709 14:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h4H 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.EJ1 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EJ1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AtD 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
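The host/auth.sh@80-82 loop traced here registers each generated key file, and its controller counterpart when one exists, with the SPDK keyring over JSON-RPC. A condensed sketch of the same loop; the file names are copied from the trace, while the `rpc.py` location and a target already listening on /var/tmp/spdk.sock are assumptions (not testable without a running SPDK target, so shown as a configuration fragment):

```shell
# Register every generated key with the running SPDK target (sketch).
declare -a keys ckeys
keys[0]=/tmp/spdk.key-null.h4H;   ckeys[0]=/tmp/spdk.key-sha512.EJ1
keys[1]=/tmp/spdk.key-null.AtD;   ckeys[1]=/tmp/spdk.key-sha384.0Y8
keys[2]=/tmp/spdk.key-sha256.eij; ckeys[2]=/tmp/spdk.key-sha256.zu3

for i in "${!keys[@]}"; do
    rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # ckey may be empty (keyid 4 in the trace has none)
    [[ -n ${ckeys[i]:-} ]] && rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```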
00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0Y8 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Y8 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eij 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zu3 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zu3 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.0Ga 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FIa ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FIa 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.z0r 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.971 14:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:27:40.971 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:27:41.232 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:41.232 14:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:44.536 Waiting for block devices as requested
00:27:44.536 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:44.536 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:44.844 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:44.844 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:44.844 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:44.844 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:45.105 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:45.105 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:45.105 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:27:45.366 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:45.366 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:45.626 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:45.626 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:45.626 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:45.626 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:45.887 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:45.887 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:46.829 14:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420
00:27:46.829
00:27:46.829 Discovery Log Number of Records 2, Generation counter 2
00:27:46.829 =====Discovery Log Entry 0======
00:27:46.829 trtype: tcp
00:27:46.829 adrfam: ipv4
00:27:46.829 subtype: current discovery subsystem
00:27:46.829 treq: not specified, sq flow control disable supported
00:27:46.829 portid: 1
00:27:46.829 trsvcid: 4420
00:27:46.829 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:46.829 traddr: 10.0.0.1
00:27:46.829 eflags: none
00:27:46.829 sectype: none
00:27:46.829 =====Discovery Log Entry 1======
00:27:46.829 trtype: tcp
00:27:46.829 adrfam: ipv4
00:27:46.829 subtype: nvme subsystem
00:27:46.829 treq: not specified, sq flow control disable supported
00:27:46.829 portid: 1
00:27:46.829 trsvcid: 4420
00:27:46.829 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:46.829 traddr: 10.0.0.1
00:27:46.829 eflags: none
00:27:46.829 sectype: none
00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.829 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.091 nvme0n1 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
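The connect_authenticate step traced here (host/auth.sh@55-61) boils down to two RPCs: restrict the host to the DH-HMAC-CHAP digests and DH groups under test, then attach with a host key and optional controller key from the keyring. A sketch using the exact flags from the trace; the `rpc.py` location and a live target are assumptions, so this is a configuration fragment rather than a runnable test:

```shell
# Negotiate sha256 + ffdhe2048 only, then attach with bidirectional auth.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the controller came up, then tear it down (host/auth.sh@64-65).
rpc.py bdev_nvme_get_controllers
rpc.py bdev_nvme_detach_controller nvme0
```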
00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.091 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.353 nvme0n1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.353 14:09:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.353 
14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.353 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.614 nvme0n1 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.614 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:47.876 nvme0n1 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.876 14:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.876 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.137 nvme0n1 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.137 14:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.137 nvme0n1 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.137 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.398 
14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:48.398 
14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.398 14:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.398 nvme0n1 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.398 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.399 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.659 14:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:48.659 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.660 14:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.660 nvme0n1 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.660 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.921 14:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.921 14:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.921 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.921 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.921 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.922 nvme0n1 00:27:48.922 14:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.922 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:49.183 14:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.183 nvme0n1 00:27:49.183 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.444 14:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.444 nvme0n1 00:27:49.444 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.705 14:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.966 nvme0n1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.966 
14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.966 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 nvme0n1 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.227 14:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.488 nvme0n1 00:27:50.488 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.488 14:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.488 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.488 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.488 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.488 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:50.748 
14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.748 14:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 14:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.009 nvme0n1 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.009 14:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.009 
14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.009 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.010 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.271 nvme0n1 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.271 14:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.271 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.842 nvme0n1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.842 14:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.842 14:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.414 nvme0n1 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.414 14:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.414 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.415 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.415 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.415 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.415 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.415 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.675 nvme0n1 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.675 14:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.675 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.936 14:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.936 14:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.936 14:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.197 nvme0n1 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.197 14:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.197 14:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.197 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.458 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.458 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.458 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.744 nvme0n1 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.744 14:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.744 14:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.359 nvme0n1 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.359 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.620 14:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.620 14:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.620 14:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.620 14:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 nvme0n1 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.191 14:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.191 14:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.132 nvme0n1 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.132 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 nvme0n1 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.703 
14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.703 14:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.279 nvme0n1 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.279 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.540 nvme0n1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.540 
14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.540 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.800 nvme0n1 
00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.800 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:57.801 14:09:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.801 14:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.801 
14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.801 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.061 nvme0n1 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.061 14:09:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.061 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.062 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.322 nvme0n1 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.322 14:09:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.322 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.583 nvme0n1 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.583 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.584 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.844 nvme0n1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.845 
14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.845 14:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.105 nvme0n1 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 
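The trace above repeatedly evaluates `host/auth.sh`'s `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` (the `auth.sh@58` records): the `${parameter:+word}` expansion emits the `--dhchap-ctrlr-key ckeyN` argument pair only when a controller key is configured for that keyid, and expands to nothing otherwise (as for keyid 4, whose `ckey` is empty). A minimal standalone sketch of that idiom, with hypothetical key values in place of the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Sketch of the optional-argument idiom from host/auth.sh:
#   ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# Hypothetical keys: index 0 has a controller key, index 1 does not.
ckeys=("DHHC-1:03:example-controller-key" "")

keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # 2 -> both words are produced when ckeys[0] is non-empty

keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # 0 -> the whole expansion vanishes when ckeys[1] is empty
```

This lets the same `rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"` invocation serve both the unidirectional and bidirectional authentication cases without branching.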
00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.105 14:09:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.105 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.365 nvme0n1 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.365 14:09:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.365 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.366 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 nvme0n1 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.626 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.887 nvme0n1 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.887 14:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.887 14:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.887 14:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.887 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.887 14:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 nvme0n1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 
14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.148 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.409 nvme0n1 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.409 14:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.409 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:00.669 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.669 14:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.670 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.930 nvme0n1 00:28:00.930 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.930 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.931 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.931 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.931 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.931 14:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.931 14:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.931 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 nvme0n1 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 14:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.192 14:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.192 
14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.452 nvme0n1 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.452 14:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:01.452 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.453 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.713 14:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.974 nvme0n1 
00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:01.974 14:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.974 
14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.974 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.545 nvme0n1 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.545 14:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.545 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.546 14:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.546 14:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.116 nvme0n1 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:03.116 14:09:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.116 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.116 14:09:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.117 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.117 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.117 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.117 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.117 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.377 nvme0n1 00:28:03.377 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.377 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.377 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.377 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.377 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.638 14:09:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:03.638 14:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.898 nvme0n1 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.898 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.158 14:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.158 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 nvme0n1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.729 14:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.671 nvme0n1 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.671 14:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.242 nvme0n1 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.242 14:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.813 nvme0n1 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.813 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.073 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.645 nvme0n1 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.645 14:09:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.645 14:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.906 nvme0n1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.906 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.167 nvme0n1 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.167 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.428 nvme0n1 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.428 nvme0n1 00:28:08.428 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:08.689 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.690 nvme0n1 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.690 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.950 14:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.950 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.950 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.950 nvme0n1 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.950 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.951 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.951 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.951 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.951 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:09.210 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.210 nvme0n1 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.210 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.210 
14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.471 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.471 nvme0n1 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.471 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.731 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:09.731 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.732 14:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.732 nvme0n1 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.732 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.732 14:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.732 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:09.993 14:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.993 nvme0n1 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.993 
14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.993 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.253 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.254 
14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.254 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.515 nvme0n1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.515 14:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.515 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.776 nvme0n1 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.776 14:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.776 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 nvme0n1 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.297 14:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.297 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.558 nvme0n1 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.558 14:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.558 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.819 nvme0n1 00:28:11.819 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.819 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.819 
14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.819 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.819 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.819 14:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.819 14:09:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.819 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.395 nvme0n1 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.395 14:09:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.395 
14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.395 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.396 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.968 nvme0n1 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.968 14:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 
00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.968 14:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.968 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.228 nvme0n1 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.228 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.489 14:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.489 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.749 nvme0n1 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.749 14:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.749 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.010 14:10:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=: 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.010 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.271 nvme0n1 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.271 
14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.271 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhhOGU1ZWE5MGI2YjM0Zjc1NTAzYjA0MmUyNTljYjRQp4t3: 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: ]] 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQ1MTdlZTU3MDE4ZjhmN2ExZGY1NGE3OTVmOGU1MTJmOTgzNjg5MWFiMmQ1Zjk5MTYwMmRkNDZjNzFkYjY2ZP5jz6o=: 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.272 14:10:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.272 14:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 nvme0n1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 14:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==: 00:28:15.212 14:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.212 14:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.781 nvme0n1 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.781 14:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.781 14:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q: 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:15.782 14:10:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.782 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 nvme0n1 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.721 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIyNzkyYTZjYWYzYzY2NzcxMDY2MDU2OWJjZjM3YWQ2MTkyZmFkMzQyMTAxZjk57stDtA==: 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: ]] 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2I0ZDQyZDc3M2NjMGUzM2RlZWRjODI4NDZjOWRmZmQaR2fb: 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.722 14:10:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.722 14:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.291 nvme0n1
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:17.291 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=:
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjlmZDFlZTc1OGFlOWQ0ZTE5NWY0ZDA0OWM4MDRiNTYwNjczODcxYmQzOTdiOWQ5ZTk5NmRjMmZhNWVmMWJkZINE+OY=:
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.292 14:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.862 nvme0n1
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.862 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==:
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==:
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==:
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==:
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.123 request:
00:28:18.123 {
00:28:18.123 "name": "nvme0",
00:28:18.123 "trtype": "tcp",
00:28:18.123 "traddr": "10.0.0.1",
00:28:18.123 "adrfam": "ipv4",
00:28:18.123 "trsvcid": "4420",
00:28:18.123 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:18.123 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:18.123 "prchk_reftag": false,
00:28:18.123 "prchk_guard": false,
00:28:18.123 "hdgst": false,
00:28:18.123 "ddgst": false,
00:28:18.123 "allow_unrecognized_csi": false,
00:28:18.123 "method": "bdev_nvme_attach_controller",
00:28:18.123 "req_id": 1
00:28:18.123 }
00:28:18.123 Got JSON-RPC error response
00:28:18.123 response:
00:28:18.123 {
00:28:18.123 "code": -5,
00:28:18.123 "message": "Input/output error"
00:28:18.123 }
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:18.123 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.124 request:
00:28:18.124 {
00:28:18.124 "name": "nvme0",
00:28:18.124 "trtype": "tcp",
00:28:18.124 "traddr": "10.0.0.1",
00:28:18.124 "adrfam": "ipv4",
00:28:18.124 "trsvcid": "4420",
00:28:18.124 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:18.124 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:18.124 "prchk_reftag": false,
00:28:18.124 "prchk_guard": false,
00:28:18.124 "hdgst": false,
00:28:18.124 "ddgst": false,
00:28:18.124 "dhchap_key": "key2",
00:28:18.124 "allow_unrecognized_csi": false,
00:28:18.124 "method": "bdev_nvme_attach_controller",
00:28:18.124 "req_id": 1
00:28:18.124 }
00:28:18.124 Got JSON-RPC error response
00:28:18.124 response:
00:28:18.124 {
00:28:18.124 "code": -5,
00:28:18.124 "message": "Input/output error"
00:28:18.124 }
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:18.124 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.384 request:
00:28:18.384 {
00:28:18.384 "name": "nvme0",
00:28:18.384 "trtype": "tcp",
00:28:18.384 "traddr": "10.0.0.1",
00:28:18.384 "adrfam": "ipv4",
00:28:18.384 "trsvcid": "4420",
00:28:18.384 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:18.384 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:18.384 "prchk_reftag": false,
00:28:18.384 "prchk_guard": false,
00:28:18.384 "hdgst": false,
00:28:18.384 "ddgst": false,
00:28:18.384 "dhchap_key": "key1",
00:28:18.384 "dhchap_ctrlr_key": "ckey2",
00:28:18.384 "allow_unrecognized_csi": false,
00:28:18.384 "method": "bdev_nvme_attach_controller",
00:28:18.384 "req_id": 1
00:28:18.384 }
00:28:18.384 Got JSON-RPC error response
00:28:18.384 response:
00:28:18.384 {
00:28:18.384 "code": -5,
00:28:18.384 "message": "Input/output error"
00:28:18.384 }
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.384 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.645 nvme0n1
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q:
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1:
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q:
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]]
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1:
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:18.645 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.646 request:
00:28:18.646 {
00:28:18.646 "name": "nvme0",
00:28:18.646 "dhchap_key": "key1",
00:28:18.646 "dhchap_ctrlr_key": "ckey2",
00:28:18.646 "method": "bdev_nvme_set_keys",
00:28:18.646 "req_id": 1
00:28:18.646 }
00:28:18.646 Got JSON-RPC error response
00:28:18.646 response:
00:28:18.646 {
00:28:18.646 "code": -13,
00:28:18.646 "message": "Permission denied"
00:28:18.646 }
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:28:18.646 14:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:28:20.026 14:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:28:20.967 14:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.967 14:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:28:20.967 14:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.967 14:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.967 14:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzlhZjRjMjhkMzRiMjBlYjg5ZjIwMDBjOGNiOWRiNDM2ZDk2M2U0OWFhNTkzYzA3WUFmRw==:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==: ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEyOTgxMDlkMGNkZDU4OTkxNjE2MmUxZTZlM2EwMGJhMDYyNzc4NTJhNWU0ODA0TrcezA==:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.967 nvme0n1
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q3YTdmM2M2MWUzYmZhNTllZDIyYzhmMzhkZTVmNGJlgx3q:
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1: ]]
00:28:20.967 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThkMDY1NDllMzJkOGUxYzI4ZWMxMmE2YzE2YWYxYTKAQEn1:
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.968 request:
00:28:20.968 {
00:28:20.968 "name": "nvme0",
00:28:20.968 "dhchap_key": "key2",
00:28:20.968 "dhchap_ctrlr_key": "ckey1",
00:28:20.968 "method": "bdev_nvme_set_keys",
00:28:20.968 "req_id": 1
00:28:20.968 }
00:28:20.968 Got JSON-RPC error response
00:28:20.968 response:
00:28:20.968 {
00:28:20.968 "code": -13,
00:28:20.968 "message": "Permission denied"
00:28:20.968 }
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:20.968 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:28:21.228 14:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
00:28:22.169 rmmod nvme_fabrics 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2565286 ']' 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2565286 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2565286 ']' 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2565286 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.169 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2565286 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2565286' 00:28:22.428 killing process with pid 2565286 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2565286 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2565286 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.428 14:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.510 14:10:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:24.510 14:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.713 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.713 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:28.713 14:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h4H /tmp/spdk.key-null.AtD /tmp/spdk.key-sha256.eij /tmp/spdk.key-sha384.0Ga 
/tmp/spdk.key-sha512.z0r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:28.713 14:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:32.921 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:32.921 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:32.921 00:28:32.921 real 1m1.277s 00:28:32.921 user 0m54.786s 00:28:32.921 sys 0m16.400s 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.921 ************************************ 00:28:32.921 END TEST nvmf_auth_host 00:28:32.921 ************************************ 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.921 ************************************ 00:28:32.921 START TEST nvmf_digest 00:28:32.921 ************************************ 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:32.921 * Looking for test storage... 00:28:32.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.921 14:10:18 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.921 14:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:32.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.921 --rc genhtml_branch_coverage=1 00:28:32.921 --rc genhtml_function_coverage=1 00:28:32.921 --rc genhtml_legend=1 00:28:32.921 --rc geninfo_all_blocks=1 00:28:32.921 --rc geninfo_unexecuted_blocks=1 00:28:32.921 00:28:32.921 ' 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:32.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.921 --rc genhtml_branch_coverage=1 00:28:32.921 --rc genhtml_function_coverage=1 00:28:32.921 --rc genhtml_legend=1 00:28:32.921 --rc geninfo_all_blocks=1 00:28:32.921 --rc geninfo_unexecuted_blocks=1 00:28:32.921 00:28:32.921 ' 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:32.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.921 --rc genhtml_branch_coverage=1 00:28:32.921 --rc genhtml_function_coverage=1 00:28:32.921 --rc genhtml_legend=1 00:28:32.921 --rc geninfo_all_blocks=1 00:28:32.921 --rc geninfo_unexecuted_blocks=1 00:28:32.921 00:28:32.921 ' 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:32.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.921 --rc genhtml_branch_coverage=1 00:28:32.921 --rc genhtml_function_coverage=1 00:28:32.921 --rc genhtml_legend=1 00:28:32.921 --rc geninfo_all_blocks=1 00:28:32.921 --rc geninfo_unexecuted_blocks=1 00:28:32.921 00:28:32.921 ' 00:28:32.921 14:10:19 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:32.921 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.922 
14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:32.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.922 14:10:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.922 14:10:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.062 14:10:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.062 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:41.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:41.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:41.063 Found net devices under 0000:31:00.0: cvl_0_0 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:41.063 Found net devices under 0000:31:00.1: cvl_0_1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:28:41.063 00:28:41.063 --- 10.0.0.2 ping statistics --- 00:28:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.063 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:41.063 00:28:41.063 --- 10.0.0.1 ping statistics --- 00:28:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.063 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.063 ************************************ 00:28:41.063 START TEST nvmf_digest_clean 00:28:41.063 ************************************ 00:28:41.063 
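The `nvmf_tcp_init` sequence traced above (addr flush, namespace creation, moving the target NIC into the namespace, addressing, link-up, the iptables accept rule, and the two cross-direction pings) can be condensed into the sketch below. Interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) and the 10.0.0.1/10.0.0.2 addresses are taken from this log; the `run` wrapper is a hypothetical dry-run helper that only echoes each command, so the sketch can be inspected without root or real NICs.

```shell
#!/bin/bash
# Dry-run condensation of the nvmf_tcp_init steps from the log above.
# Swap run()'s body for "$@" (as root, with real e810 ports) to apply.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # echo only; replace with "$@" to execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
# Target side lives inside the namespace; initiator stays in the root netns.
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

The namespace boundary is why the target app is launched via `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` later in the log: target and initiator share one host but see disjoint network stacks.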
14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.063 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2582353 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2582353 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2582353 ']' 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.064 14:10:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.064 14:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.064 [2024-11-06 14:10:26.728488] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:41.064 [2024-11-06 14:10:26.728552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.064 [2024-11-06 14:10:26.829970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.064 [2024-11-06 14:10:26.881130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.064 [2024-11-06 14:10:26.881181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.064 [2024-11-06 14:10:26.881190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.064 [2024-11-06 14:10:26.881197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.064 [2024-11-06 14:10:26.881204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:41.064 [2024-11-06 14:10:26.882016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.325 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.586 null0 00:28:41.586 [2024-11-06 14:10:27.698811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.586 [2024-11-06 14:10:27.723112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2582400 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2582400 /var/tmp/bperf.sock 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2582400 ']' 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.586 14:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.586 [2024-11-06 14:10:27.783433] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:41.586 [2024-11-06 14:10:27.783497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582400 ] 00:28:41.846 [2024-11-06 14:10:27.878093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.846 [2024-11-06 14:10:27.931078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.417 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:42.417 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:42.417 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:42.417 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:42.417 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.678 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.678 14:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.939 nvme0n1 00:28:42.939 14:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.939 14:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.199 Running I/O for 2 seconds... 00:28:45.096 19062.00 IOPS, 74.46 MiB/s [2024-11-06T13:10:31.377Z] 19971.00 IOPS, 78.01 MiB/s 00:28:45.097 Latency(us) 00:28:45.097 [2024-11-06T13:10:31.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.097 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:45.097 nvme0n1 : 2.00 20001.68 78.13 0.00 0.00 6392.86 2389.33 17148.59 00:28:45.097 [2024-11-06T13:10:31.377Z] =================================================================================================================== 00:28:45.097 [2024-11-06T13:10:31.377Z] Total : 20001.68 78.13 0.00 0.00 6392.86 2389.33 17148.59 00:28:45.097 { 00:28:45.097 "results": [ 00:28:45.097 { 00:28:45.097 "job": "nvme0n1", 00:28:45.097 "core_mask": "0x2", 00:28:45.097 "workload": "randread", 00:28:45.097 "status": "finished", 00:28:45.097 "queue_depth": 128, 00:28:45.097 "io_size": 4096, 00:28:45.097 "runtime": 2.003332, 00:28:45.097 "iops": 20001.677205775177, 00:28:45.097 "mibps": 78.13155158505928, 00:28:45.097 "io_failed": 0, 00:28:45.097 "io_timeout": 0, 00:28:45.097 "avg_latency_us": 6392.856958988436, 00:28:45.097 "min_latency_us": 2389.3333333333335, 00:28:45.097 "max_latency_us": 17148.586666666666 00:28:45.097 } 00:28:45.097 ], 00:28:45.097 "core_count": 1 00:28:45.097 } 00:28:45.097 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.097 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:45.097 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.097 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.097 | select(.opcode=="crc32c") 00:28:45.097 | "\(.module_name) \(.executed)"' 00:28:45.097 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2582400 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2582400 ']' 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2582400 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2582400 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2582400' 00:28:45.358 killing process with pid 2582400 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2582400 00:28:45.358 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.358 00:28:45.358 Latency(us) 00:28:45.358 [2024-11-06T13:10:31.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.358 [2024-11-06T13:10:31.638Z] =================================================================================================================== 00:28:45.358 [2024-11-06T13:10:31.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.358 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2582400 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2583196 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2583196 /var/tmp/bperf.sock 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2583196 ']' 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:45.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.619 14:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.619 [2024-11-06 14:10:31.699541] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:45.619 [2024-11-06 14:10:31.699597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583196 ] 00:28:45.619 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:45.619 Zero copy mechanism will not be used. 
00:28:45.619 [2024-11-06 14:10:31.783809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.619 [2024-11-06 14:10:31.813258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.560 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:46.560 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:46.560 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:46.561 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:46.561 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:46.561 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.561 14:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.821 nvme0n1 00:28:46.821 14:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:46.821 14:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.081 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.081 Zero copy mechanism will not be used. 00:28:47.081 Running I/O for 2 seconds... 
00:28:48.963 3853.00 IOPS, 481.62 MiB/s [2024-11-06T13:10:35.243Z] 3441.00 IOPS, 430.12 MiB/s 00:28:48.963 Latency(us) 00:28:48.963 [2024-11-06T13:10:35.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.963 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:48.963 nvme0n1 : 2.00 3442.19 430.27 0.00 0.00 4645.07 843.09 9284.27 00:28:48.963 [2024-11-06T13:10:35.243Z] =================================================================================================================== 00:28:48.963 [2024-11-06T13:10:35.243Z] Total : 3442.19 430.27 0.00 0.00 4645.07 843.09 9284.27 00:28:48.963 { 00:28:48.963 "results": [ 00:28:48.963 { 00:28:48.963 "job": "nvme0n1", 00:28:48.963 "core_mask": "0x2", 00:28:48.963 "workload": "randread", 00:28:48.963 "status": "finished", 00:28:48.963 "queue_depth": 16, 00:28:48.963 "io_size": 131072, 00:28:48.963 "runtime": 2.003959, 00:28:48.963 "iops": 3442.1861924320806, 00:28:48.963 "mibps": 430.2732740540101, 00:28:48.963 "io_failed": 0, 00:28:48.963 "io_timeout": 0, 00:28:48.963 "avg_latency_us": 4645.07014593602, 00:28:48.963 "min_latency_us": 843.0933333333334, 00:28:48.963 "max_latency_us": 9284.266666666666 00:28:48.963 } 00:28:48.963 ], 00:28:48.963 "core_count": 1 00:28:48.963 } 00:28:48.963 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:48.963 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:48.963 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:48.963 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:48.963 | select(.opcode=="crc32c") 00:28:48.963 | "\(.module_name) \(.executed)"' 00:28:48.963 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2583196 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2583196 ']' 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2583196 00:28:49.223 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2583196 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2583196' 00:28:49.224 killing process with pid 2583196 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2583196 00:28:49.224 Received shutdown signal, test time was about 2.000000 seconds 
00:28:49.224 00:28:49.224 Latency(us) 00:28:49.224 [2024-11-06T13:10:35.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.224 [2024-11-06T13:10:35.504Z] =================================================================================================================== 00:28:49.224 [2024-11-06T13:10:35.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2583196 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:49.224 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2584003 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2584003 /var/tmp/bperf.sock 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2584003 ']' 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.485 14:10:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.485 14:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.485 [2024-11-06 14:10:35.560095] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:49.485 [2024-11-06 14:10:35.560167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584003 ] 00:28:49.485 [2024-11-06 14:10:35.643756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.485 [2024-11-06 14:10:35.673474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.426 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.686 nvme0n1 00:28:50.686 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.686 14:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.947 Running I/O for 2 seconds... 
00:28:52.827 30233.00 IOPS, 118.10 MiB/s [2024-11-06T13:10:39.107Z] 30356.50 IOPS, 118.58 MiB/s 00:28:52.827 Latency(us) 00:28:52.827 [2024-11-06T13:10:39.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.827 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.827 nvme0n1 : 2.00 30373.47 118.65 0.00 0.00 4209.49 2116.27 12724.91 00:28:52.827 [2024-11-06T13:10:39.107Z] =================================================================================================================== 00:28:52.827 [2024-11-06T13:10:39.107Z] Total : 30373.47 118.65 0.00 0.00 4209.49 2116.27 12724.91 00:28:52.827 { 00:28:52.827 "results": [ 00:28:52.827 { 00:28:52.827 "job": "nvme0n1", 00:28:52.827 "core_mask": "0x2", 00:28:52.827 "workload": "randwrite", 00:28:52.827 "status": "finished", 00:28:52.827 "queue_depth": 128, 00:28:52.827 "io_size": 4096, 00:28:52.827 "runtime": 2.002537, 00:28:52.827 "iops": 30373.471251717197, 00:28:52.827 "mibps": 118.6463720770203, 00:28:52.827 "io_failed": 0, 00:28:52.827 "io_timeout": 0, 00:28:52.827 "avg_latency_us": 4209.494680170108, 00:28:52.827 "min_latency_us": 2116.266666666667, 00:28:52.827 "max_latency_us": 12724.906666666666 00:28:52.827 } 00:28:52.827 ], 00:28:52.827 "core_count": 1 00:28:52.827 } 00:28:52.827 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:52.827 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:52.827 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:52.827 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:52.827 | select(.opcode=="crc32c") 00:28:52.827 | "\(.module_name) \(.executed)"' 00:28:52.827 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.087 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.087 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.087 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.087 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.087 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2584003 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2584003 ']' 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2584003 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2584003 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2584003' 00:28:53.088 killing process with pid 2584003 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2584003 00:28:53.088 Received shutdown signal, test time was about 2.000000 seconds 
00:28:53.088 00:28:53.088 Latency(us) 00:28:53.088 [2024-11-06T13:10:39.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.088 [2024-11-06T13:10:39.368Z] =================================================================================================================== 00:28:53.088 [2024-11-06T13:10:39.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.088 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2584003 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2584765 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2584765 /var/tmp/bperf.sock 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2584765 ']' 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:53.348 14:10:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:53.348 14:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.348 [2024-11-06 14:10:39.482259] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:53.348 [2024-11-06 14:10:39.482315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584765 ] 00:28:53.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.348 Zero copy mechanism will not be used. 
00:28:53.348 [2024-11-06 14:10:39.565254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.348 [2024-11-06 14:10:39.594578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.288 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.860 nvme0n1 00:28:54.860 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.860 14:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.860 Zero copy mechanism will not be used. 00:28:54.860 Running I/O for 2 seconds... 
00:28:56.746 6752.00 IOPS, 844.00 MiB/s [2024-11-06T13:10:43.026Z] 6443.00 IOPS, 805.38 MiB/s 00:28:56.746 Latency(us) 00:28:56.746 [2024-11-06T13:10:43.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.747 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:56.747 nvme0n1 : 2.01 6434.56 804.32 0.00 0.00 2481.10 1235.63 5707.09 00:28:56.747 [2024-11-06T13:10:43.027Z] =================================================================================================================== 00:28:56.747 [2024-11-06T13:10:43.027Z] Total : 6434.56 804.32 0.00 0.00 2481.10 1235.63 5707.09 00:28:56.747 { 00:28:56.747 "results": [ 00:28:56.747 { 00:28:56.747 "job": "nvme0n1", 00:28:56.747 "core_mask": "0x2", 00:28:56.747 "workload": "randwrite", 00:28:56.747 "status": "finished", 00:28:56.747 "queue_depth": 16, 00:28:56.747 "io_size": 131072, 00:28:56.747 "runtime": 2.005576, 00:28:56.747 "iops": 6434.560445478008, 00:28:56.747 "mibps": 804.320055684751, 00:28:56.747 "io_failed": 0, 00:28:56.747 "io_timeout": 0, 00:28:56.747 "avg_latency_us": 2481.098174867622, 00:28:56.747 "min_latency_us": 1235.6266666666668, 00:28:56.747 "max_latency_us": 5707.093333333333 00:28:56.747 } 00:28:56.747 ], 00:28:56.747 "core_count": 1 00:28:56.747 } 00:28:56.747 14:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.747 14:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:56.747 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.747 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.747 | select(.opcode=="crc32c") 00:28:56.747 | "\(.module_name) \(.executed)"' 00:28:56.747 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2584765 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2584765 ']' 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2584765 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2584765 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2584765' 00:28:57.008 killing process with pid 2584765 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2584765 00:28:57.008 Received shutdown signal, test time was about 2.000000 seconds 
00:28:57.008 00:28:57.008 Latency(us) 00:28:57.008 [2024-11-06T13:10:43.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.008 [2024-11-06T13:10:43.288Z] =================================================================================================================== 00:28:57.008 [2024-11-06T13:10:43.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.008 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2584765 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2582353 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2582353 ']' 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2582353 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2582353 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2582353' 00:28:57.268 killing process with pid 2582353 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2582353 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2582353 00:28:57.268 00:28:57.268 
real 0m16.863s 00:28:57.268 user 0m33.151s 00:28:57.268 sys 0m3.886s 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:57.268 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.268 ************************************ 00:28:57.268 END TEST nvmf_digest_clean 00:28:57.268 ************************************ 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.529 ************************************ 00:28:57.529 START TEST nvmf_digest_error 00:28:57.529 ************************************ 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2585476 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2585476 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2585476 ']' 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:57.529 14:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.529 [2024-11-06 14:10:43.651799] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:57.529 [2024-11-06 14:10:43.651852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.529 [2024-11-06 14:10:43.745856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.529 [2024-11-06 14:10:43.778819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.529 [2024-11-06 14:10:43.778848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:57.529 [2024-11-06 14:10:43.778854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.529 [2024-11-06 14:10:43.778858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.529 [2024-11-06 14:10:43.778862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.529 [2024-11-06 14:10:43.779335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.472 [2024-11-06 14:10:44.485278] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.472 14:10:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.472 null0 00:28:58.472 [2024-11-06 14:10:44.564789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.472 [2024-11-06 14:10:44.588981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2585823 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2585823 /var/tmp/bperf.sock 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2585823 ']' 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:58.472 14:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.472 [2024-11-06 14:10:44.644780] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:28:58.472 [2024-11-06 14:10:44.644828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585823 ] 00:28:58.472 [2024-11-06 14:10:44.729984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.733 [2024-11-06 14:10:44.759868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.303 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:59.303 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:59.303 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.304 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.564 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.825 nvme0n1 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:59.825 14:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.825 Running I/O for 2 seconds... 00:28:59.825 [2024-11-06 14:10:46.082313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:28:59.825 [2024-11-06 14:10:46.082344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.825 [2024-11-06 14:10:46.082353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.825 [2024-11-06 14:10:46.095068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:28:59.825 [2024-11-06 14:10:46.095089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.825 [2024-11-06 14:10:46.095097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.087 [2024-11-06 14:10:46.107409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.087 [2024-11-06 14:10:46.107429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.087 [2024-11-06 14:10:46.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.087 [2024-11-06 14:10:46.118629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.087 [2024-11-06 14:10:46.118648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19765 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.118655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.126362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.126381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.126388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.137877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.137897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.137903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.147713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.147732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.147744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.158248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.158266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.158273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.169111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.169128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.169135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.176859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.176877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.176883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.186016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.186034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.186041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.195961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.195979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.195986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.204359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.204377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.204384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.212843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.212860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.212867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.221895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.221914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.221920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.230375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.230396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.230403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.238957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.238975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.238981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.247795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.247813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.087 [2024-11-06 14:10:46.247819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.087 [2024-11-06 14:10:46.258514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.087 [2024-11-06 14:10:46.258533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.258540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.267556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.267574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.267580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.277696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.277714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.277720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.285981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.285998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.286004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.296325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.296343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.296350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.305519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.305537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.305544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.313312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.313330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.313336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.322656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.322674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.322680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.331958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.331975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.331982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.341545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.341563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.341570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.350879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.350897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.350903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.088 [2024-11-06 14:10:46.359371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.088 [2024-11-06 14:10:46.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.088 [2024-11-06 14:10:46.359395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.369366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.369384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.369390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.378908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.378925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.378931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.385949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.385966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.385976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.397257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.397274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.397281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.408757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.408774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.420816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.420834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.420841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.428910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.428928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.428934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.440230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.440247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.440253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.349 [2024-11-06 14:10:46.451668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.349 [2024-11-06 14:10:46.451686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.349 [2024-11-06 14:10:46.451692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.460920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.460937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.460943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.468769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.468786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.468793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.478367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.478384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.478391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.487382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.487399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.487406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.496524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.496541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.496548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.505160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.505177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.505184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.513999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.514017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.514023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.522821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.522837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.522843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.531317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.531334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.531340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.540131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.540149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.540156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.550712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.550730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.550740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.559522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.559545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.568180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.568197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.568203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.576070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.576087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.576093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.585415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.585432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.585438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.594063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.594080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.594087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.603086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.603103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.611494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.611512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.611518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.350 [2024-11-06 14:10:46.621086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.350 [2024-11-06 14:10:46.621103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.350 [2024-11-06 14:10:46.621109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.630771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.630792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.630799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.638308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.638325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.638332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.647444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.647461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.647467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.656064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.656081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.656087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.665348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.665366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.665372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.675039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.675056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.675063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.683960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.683977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.683984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.692460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.692478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.692484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.702659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.702676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.702683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.712737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.712758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.712764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.720853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.720869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.720876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.732176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.732193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.732200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.741352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.741369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.741376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.750613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.750630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.758372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.758390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.758397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.768189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.768207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.768213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.777113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.777131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.777137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.786469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.786486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.786495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.795043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.795060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.795067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.803319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.803337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.803343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.812953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.613 [2024-11-06 14:10:46.812970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.613 [2024-11-06 14:10:46.812977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.613 [2024-11-06 14:10:46.821649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.821666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.821673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.614 [2024-11-06 14:10:46.829089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.829106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.829113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.614 [2024-11-06 14:10:46.838758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.838775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.838781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.614 [2024-11-06 14:10:46.848540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.848557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.848563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.614 [2024-11-06 14:10:46.856576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.856593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.856600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.614 [2024-11-06 14:10:46.865412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:00.614 [2024-11-06 14:10:46.865432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.614 [2024-11-06 14:10:46.865439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.614 [2024-11-06 14:10:46.874810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.614 [2024-11-06 14:10:46.874827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.614 [2024-11-06 14:10:46.874834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.614 [2024-11-06 14:10:46.882894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.614 [2024-11-06 14:10:46.882911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.614 [2024-11-06 14:10:46.882917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.891946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.891964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.891971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.900842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.900860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.900866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.909644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.909661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.909667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.919001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.919019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.919026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.928299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.928317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.928324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.936314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.936331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10851 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.936337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.945679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.945697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.945703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.954599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.954616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.954622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.963213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.963230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.963236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.972272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.972289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.876 [2024-11-06 14:10:46.972296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.876 [2024-11-06 14:10:46.981471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.876 [2024-11-06 14:10:46.981488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:46.981495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:46.991061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:46.991079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:46.991085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:46.999295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:46.999313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:46.999319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.008394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.008411] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.008418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.017106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.017124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.017134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.025656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.025674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.025680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.034296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.034314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.034320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.042879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.042897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.042904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.052257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.052275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.052282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.061627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.061645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.061651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 27420.00 IOPS, 107.11 MiB/s [2024-11-06T13:10:47.157Z] [2024-11-06 14:10:47.070381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.070399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.070406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 
[2024-11-06 14:10:47.079751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.079768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.079775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.088330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.088347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.088354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.097541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.097559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.097566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.106199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.106216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.106223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.115489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.115506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.115513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.123354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.123372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.123378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.133378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.133396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.133402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.141148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.141165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.141171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.877 [2024-11-06 14:10:47.150623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:00.877 [2024-11-06 14:10:47.150640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.877 [2024-11-06 14:10:47.150647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.140 [2024-11-06 14:10:47.160011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.140 [2024-11-06 14:10:47.160029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.140 [2024-11-06 14:10:47.160037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.140 [2024-11-06 14:10:47.167822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.140 [2024-11-06 14:10:47.167839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.140 [2024-11-06 14:10:47.167849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.140 [2024-11-06 14:10:47.178931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.178949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:01.141 [2024-11-06 14:10:47.178955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.190678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.190695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.190702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.201002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.201020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.201026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.210058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.210075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.210082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.219485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.219502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:11697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.219508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.228257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.228274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.228281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.235964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.235981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.235988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.245096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.245113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.245120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.254781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.254804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.254811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.264154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.264171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.264178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.272985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.273002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.273009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.281676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.281694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.281700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.290965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.290982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.290989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.299575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.299592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.299599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.308426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.308443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.308450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.318398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.318415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.318421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.325795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.325813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.325819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.335630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.335647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.335654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.346028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.346045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.346052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.354411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.354430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.354436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.363882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.363900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.363907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.371742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.371764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.371770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.381449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.381467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.381474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.389649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.389667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.389673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.399191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.399208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.399215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.407259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.407277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.407286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.141 [2024-11-06 14:10:47.417330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.141 [2024-11-06 14:10:47.417348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.141 [2024-11-06 14:10:47.417355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.425319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.425337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.425343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.435004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.435021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.435027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.444567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.444585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.444592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.453297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.453315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.453322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.462263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.462281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:01.404 [2024-11-06 14:10:47.462288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.471182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.471200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.471206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.479545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.479562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.479569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.487828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.487849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.487856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.497069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.497087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:14897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.497094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.505681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.505699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.505705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.514654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.514671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.514678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.524125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.524143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.524150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.532413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.532431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.532437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.540975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.540993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.540999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.550755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.550773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.550780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.559749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.559767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.559776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.568829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.568847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.568853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.577190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.577214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.586982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.587000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.587007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.595933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.595957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.605912] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.605929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.605935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.614675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.614693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.614699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.623910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.623927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.623933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.632144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.632162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.632169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.641854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.641882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.650617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.650635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.650642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.659333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.659350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.659357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.668461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.668479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.668485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.404 [2024-11-06 14:10:47.679182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.404 [2024-11-06 14:10:47.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.404 [2024-11-06 14:10:47.679206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.687168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.687186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.687192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.696734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.696756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.696763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.705932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.705957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.715306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.715323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.715330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.724210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.724227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.724234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.732923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.732941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.732948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.741789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.741806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.749313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.749331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.749337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.760464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.760482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.760488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.769302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.769320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.769327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.778407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.778425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.786883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.786907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.795615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.795643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.806016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.806033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.806040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.814815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.814832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.814839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.823557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.823575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.823582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.832867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.832885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.832891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.841807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.841825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.841831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.851009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd4a1c0) 00:29:01.666 [2024-11-06 14:10:47.851026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.666 [2024-11-06 14:10:47.851033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.666 [2024-11-06 14:10:47.858488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.858506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.858512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.868107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.868125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.868132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.877751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.877772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.877779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.887954] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.887972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.887978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.896516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.896534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.896541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.905206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.905224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.905230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.667 [2024-11-06 14:10:47.913571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0) 00:29:01.667 [2024-11-06 14:10:47.913589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.667 [2024-11-06 14:10:47.913595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:29:01.667 [2024-11-06 14:10:47.922435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.667 [2024-11-06 14:10:47.922453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.667 [2024-11-06 14:10:47.922459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.667 [2024-11-06 14:10:47.932454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.667 [2024-11-06 14:10:47.932472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.667 [2024-11-06 14:10:47.932478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.667 [2024-11-06 14:10:47.939985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.667 [2024-11-06 14:10:47.940002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.667 [2024-11-06 14:10:47.940009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.950487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.950505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.950512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.959219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.959237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.959243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.968162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.968180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.968186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.978026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.978043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.986399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.986417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.986423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:47.995707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:47.995725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:47.995731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.004956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.004974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.004980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.011988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.012006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.012013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.022012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.022030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.022037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.031493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.031511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.031521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.040429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.040446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.040453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.050545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.050562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.050569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 [2024-11-06 14:10:48.059443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.059461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.059467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928 27792.50 IOPS, 108.56 MiB/s [2024-11-06T13:10:48.208Z] [2024-11-06 14:10:48.069720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4a1c0)
00:29:01.928 [2024-11-06 14:10:48.069734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.928 [2024-11-06 14:10:48.069741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.928
00:29:01.929 Latency(us)
00:29:01.929 [2024-11-06T13:10:48.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:01.929 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:01.929 nvme0n1 : 2.00 27811.89 108.64 0.00 0.00 4597.89 2293.76 16493.23
00:29:01.929 [2024-11-06T13:10:48.209Z] ===================================================================================================================
00:29:01.929 [2024-11-06T13:10:48.209Z] Total : 27811.89 108.64 0.00 0.00 4597.89 2293.76 16493.23
00:29:01.929 {
00:29:01.929 "results": [
00:29:01.929 {
00:29:01.929 "job": "nvme0n1",
00:29:01.929 "core_mask": "0x2",
00:29:01.929 "workload": "randread",
00:29:01.929 "status": "finished",
00:29:01.929 "queue_depth": 128,
00:29:01.929 "io_size": 4096,
00:29:01.929 "runtime": 2.003208,
00:29:01.929 "iops": 27811.889728874885,
00:29:01.929 "mibps": 108.64019425341752,
00:29:01.929 "io_failed": 0,
00:29:01.929 "io_timeout": 0,
00:29:01.929 "avg_latency_us": 4597.88541226165,
00:29:01.929 "min_latency_us": 2293.76,
00:29:01.929 "max_latency_us": 16493.226666666666
}
00:29:01.929 ],
00:29:01.929 "core_count": 1
00:29:01.929 }
00:29:01.929 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:01.929 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:01.929 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:01.929 | .driver_specific
00:29:01.929 | .nvme_error
00:29:01.929 | .status_code
00:29:01.929 | .command_transient_transport_error'
00:29:01.929 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2585823
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2585823 ']'
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2585823
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2585823
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2585823'
00:29:02.188 killing process with pid 2585823
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2585823
00:29:02.188 Received shutdown signal, test time was about 2.000000 seconds
00:29:02.188
00:29:02.188 Latency(us)
00:29:02.188 [2024-11-06T13:10:48.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.188 [2024-11-06T13:10:48.468Z] ===================================================================================================================
00:29:02.188 [2024-11-06T13:10:48.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2585823
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2586505
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2586505 /var/tmp/bperf.sock
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2586505 ']'
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:02.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:02.188 14:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:02.448 [2024-11-06 14:10:48.492292] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:29:02.448 [2024-11-06 14:10:48.492350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586505 ]
00:29:02.448 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:02.448 Zero copy mechanism will not be used.
00:29:02.448 [2024-11-06 14:10:48.575937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:02.448 [2024-11-06 14:10:48.605430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.387 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.648 nvme0n1
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:03.648 14:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:03.648 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:03.648 Zero copy mechanism will not be used.
00:29:03.648 Running I/O for 2 seconds...
00:29:03.648 [2024-11-06 14:10:49.848214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.848248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.848257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.856732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.856759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.856766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.865018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.865040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.865048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.872653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.872673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.872680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.878486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.878505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.878512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.882997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.883017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.883023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.893161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.893181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.893188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.897564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.897583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.897590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.901906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.901925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.648 [2024-11-06 14:10:49.901931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.648 [2024-11-06 14:10:49.909898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.648 [2024-11-06 14:10:49.909917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.649 [2024-11-06 14:10:49.909924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.649 [2024-11-06 14:10:49.919446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.649 [2024-11-06 14:10:49.919465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.649 [2024-11-06 14:10:49.919472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.649 [2024-11-06 14:10:49.924962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.649 [2024-11-06 14:10:49.924981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.649 [2024-11-06 14:10:49.924991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.935703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.935721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.935728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.945686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.945705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.945711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.956191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.956209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.956216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.967751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.967770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.967777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.979589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.979608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.979615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:49.990165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:49.990184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:49.990190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.000371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.000390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.000397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.007386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.007407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.007415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.011786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.011809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.011816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.016855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.016874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.016881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.021533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.021553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.021560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.028906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.028925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.028932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.038847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.038866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.038873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.050566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.050585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.050592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.058885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.058905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.058911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.064057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.064076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.064082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.068585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.068604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.068610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.072965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.072984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.072990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.077501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.077520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.077527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.081858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.081878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.081884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.091855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.091874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.091881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.101854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.101878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.101886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.106753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.106773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.106780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.114376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.114395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.114402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.125769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.125788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-11-06 14:10:50.125795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.910 [2024-11-06 14:10:50.134752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.910 [2024-11-06 14:10:50.134771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.134782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.143448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.143467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.143474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.148167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.148186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.148193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.157843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.157862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.157869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.166303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.166322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.166329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.172644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.172662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.172669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.911 [2024-11-06 14:10:50.182549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:03.911 [2024-11-06 14:10:50.182568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-11-06 14:10:50.182574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:04.172 [2024-11-06 14:10:50.188514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60)
00:29:04.172 [2024-11-06 14:10:50.188533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.188539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.195065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.195083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.195089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.197775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.197796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.197802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.205245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.205263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.205269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.214796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.214815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.214823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.227384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.227401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.227408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.238490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.238507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.238513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.250349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.250367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.250373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.261740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.261762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.261768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.274370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.274389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.274395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.282756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.282774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.282781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.290115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.290133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.290140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.298462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.298480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.298486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.308269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.308287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.308295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.314194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.314219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.324904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.324923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.324929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.336784] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.336801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.172 [2024-11-06 14:10:50.336808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.172 [2024-11-06 14:10:50.348281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.172 [2024-11-06 14:10:50.348299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.348306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.360611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.360629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.360636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.374188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.374205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.374215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.385649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.385666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.385673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.397164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.397182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.397188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.405517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.405535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.405541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.413100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.413118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.413124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.422932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.422950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.422956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.432643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.432661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.432668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.173 [2024-11-06 14:10:50.443111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.173 [2024-11-06 14:10:50.443129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.173 [2024-11-06 14:10:50.443136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.456149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.456167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.456174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.463258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.463276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.463282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.476586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.476604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.476611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.486003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.486021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.486027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.496530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.496548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.496555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.507888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.507905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.507912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.516774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.516791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.516798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.527416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.527434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.527441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.539243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.539261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.539267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.542634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.542653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.542662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.547567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.547592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.553808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.553826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.553833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.562417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.562435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.562442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.568687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.568712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.575737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.575760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.575766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.435 [2024-11-06 14:10:50.586997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.435 [2024-11-06 14:10:50.587014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.435 [2024-11-06 14:10:50.587021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.592731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 
00:29:04.436 [2024-11-06 14:10:50.592756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.592762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.599415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.599433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.599440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.606812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.606834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.606841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.617069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.617088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.617094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.624799] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.624818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.624824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.631658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.631677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.631684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.639469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.639488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.639494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.648489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.648508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.655824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.655843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.655849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.663180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.663199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.663206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.669960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.669978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.669985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.676770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.676789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.676795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.686695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.686714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.686720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.691521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.691539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.698988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.699006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.699013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.436 [2024-11-06 14:10:50.708969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.436 [2024-11-06 14:10:50.708987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.436 [2024-11-06 14:10:50.708993] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.716714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.716733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.716740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.725995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.726014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.726021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.736268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.736287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.736294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.747097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.747115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.747125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.757731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.757755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.757761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.764973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.764991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.764998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.773577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.773595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.773602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.779004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.779023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.779030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.785839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.785858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.785865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.792772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.792790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.792797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.801520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.801539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.801545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.809535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.809553] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.809559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.819413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.819435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.819441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.828253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.828272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.828278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 3652.00 IOPS, 456.50 MiB/s [2024-11-06T13:10:50.978Z] [2024-11-06 14:10:50.836552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.836570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.836576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.846216] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.846235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.846241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.852819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.852837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.852843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.858889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.858908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.858914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.870107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.870125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.870132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.878447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.878466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.878472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.883776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.883794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.883801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.892357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.892383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.900758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.900783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.912310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.912329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.912335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.921167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.921186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.921192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.928688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.928707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.928714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.940010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.940029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.940036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.946632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.946651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.946658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.957506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.957525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.957531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.963521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.963543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.963550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.970854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.970871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.970878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.698 [2024-11-06 14:10:50.973318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.698 [2024-11-06 14:10:50.973335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.698 [2024-11-06 14:10:50.973342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:50.978370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:50.978389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:50.978395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:50.983800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:50.983819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:50.983826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:50.988965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:50.988984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:50.988990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:50.999210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:50.999230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:50.999236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.010091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.010110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.019515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.019533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.019541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.026737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.026761] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.026767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.036680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.036699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.036706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.046065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.046083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.046090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.055538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.055557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.055563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.063818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.063837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.961 [2024-11-06 14:10:51.063844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.961 [2024-11-06 14:10:51.069705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.961 [2024-11-06 14:10:51.069724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.069731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.076314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.076333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.076340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.086050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.086069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.086075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.096080] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.096099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.096109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.105160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.105180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.105186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.113856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.113874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.113882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.120046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.120065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.120072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.128981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.129001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.129007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.139700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.139719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.144734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.144760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.144766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.150088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.150107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.150114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.159173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.159192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.159199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.164813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.164837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.171750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.171769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.171775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.178914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.178933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 
14:10:51.178939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.962 [2024-11-06 14:10:51.185023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.962 [2024-11-06 14:10:51.185042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.962 [2024-11-06 14:10:51.185048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.190546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.190565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.190571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.194983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.195002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.195008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.205456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.205475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.205481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.210194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.210212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.210218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.214560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.214579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.214586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.218888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.218907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.218913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.224466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.224485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.224491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.228695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.228714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.228721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.963 [2024-11-06 14:10:51.236726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:04.963 [2024-11-06 14:10:51.236750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.963 [2024-11-06 14:10:51.236757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.241406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.241425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.241432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.245714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.245733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.245740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.249982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.250001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.250007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.256175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.256194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.256200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.260686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.260705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.260715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.265009] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.265029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.265035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.269347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.269365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.269372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.273846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.273865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.273871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.279471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.279490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.279496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.284053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.284072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.284078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.295256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.295275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.295282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.306602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.306621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.306627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.318191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.318210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.318217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.329664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.329683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.329689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.340556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.340575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.226 [2024-11-06 14:10:51.340582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.226 [2024-11-06 14:10:51.346460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.226 [2024-11-06 14:10:51.346479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.346485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.350923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.350941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 
14:10:51.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.355361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.355380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.355386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.360355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.360374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.360380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.364882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.364901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.364907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.374623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.374643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.374649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.381822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.381840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.381850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.384970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.384988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.384995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.392307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.392326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.392333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.397081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.397100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.397107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.405419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.405438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.405445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.414648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.414666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.414673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.423919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.423938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.423944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.435890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.435909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.435915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.448488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.448506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.448513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.459755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.459783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.471203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.471221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.471228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.483619] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.483644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.227 [2024-11-06 14:10:51.496362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.227 [2024-11-06 14:10:51.496381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.227 [2024-11-06 14:10:51.496387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.507866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.507892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.519707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.519726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.519734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.529542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.529561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.529567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.541414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.541433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.541439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.552238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.552256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.552262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.561210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.561230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.561236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.567896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.567915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.567922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.573505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.573523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.573530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.582493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.582512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.582518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.593143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.593162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 
14:10:51.593168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.604894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.604913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.604919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.614546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.614565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.614572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.621396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.621415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.621422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.628801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.628820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.628829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.635896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.635916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.635922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.643449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.643467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.643474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.647945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.489 [2024-11-06 14:10:51.647964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.489 [2024-11-06 14:10:51.647970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.489 [2024-11-06 14:10:51.655426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.655445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.655451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.660685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.660704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.660710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.669064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.669083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.669090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.673463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.673481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.673487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.680421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.680440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.680446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.691605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.691624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.691631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.702361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.702381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.702387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.714125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.714144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.714150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.723549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.723568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.723574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.733095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.733114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.733121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.743555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.743573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.743580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.753988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.754007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:05.490 [2024-11-06 14:10:51.762878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.490 [2024-11-06 14:10:51.762897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.490 [2024-11-06 14:10:51.762903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.773808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.773827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.773837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.785352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.785371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.785377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.796700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.796719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.796725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.807228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.807247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.807253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.817896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.817920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.751 [2024-11-06 14:10:51.828872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.828891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.828897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.751 3746.50 IOPS, 468.31 MiB/s [2024-11-06T13:10:52.031Z] [2024-11-06 14:10:51.839446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b7a60) 00:29:05.751 [2024-11-06 14:10:51.839464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:05.751 [2024-11-06 14:10:51.839471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.751 00:29:05.751 Latency(us) 00:29:05.751 [2024-11-06T13:10:52.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.751 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:05.751 nvme0n1 : 2.00 3749.26 468.66 0.00 0.00 4263.46 587.09 13653.33 00:29:05.751 [2024-11-06T13:10:52.031Z] =================================================================================================================== 00:29:05.751 [2024-11-06T13:10:52.031Z] Total : 3749.26 468.66 0.00 0.00 4263.46 587.09 13653.33 00:29:05.751 { 00:29:05.751 "results": [ 00:29:05.751 { 00:29:05.751 "job": "nvme0n1", 00:29:05.751 "core_mask": "0x2", 00:29:05.751 "workload": "randread", 00:29:05.751 "status": "finished", 00:29:05.751 "queue_depth": 16, 00:29:05.751 "io_size": 131072, 00:29:05.751 "runtime": 2.002796, 00:29:05.751 "iops": 3749.258536565881, 00:29:05.751 "mibps": 468.65731707073513, 00:29:05.751 "io_failed": 0, 00:29:05.751 "io_timeout": 0, 00:29:05.751 "avg_latency_us": 4263.457660585075, 00:29:05.751 "min_latency_us": 587.0933333333334, 00:29:05.751 "max_latency_us": 13653.333333333334 00:29:05.751 } 00:29:05.751 ], 00:29:05.751 "core_count": 1 00:29:05.751 } 00:29:05.751 14:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:05.751 14:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:05.751 14:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:05.751 | .driver_specific 00:29:05.751 | .nvme_error 00:29:05.751 | .status_code 00:29:05.751 | .command_transient_transport_error' 00:29:05.751 14:10:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 )) 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2586505 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2586505 ']' 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2586505 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2586505 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2586505' 00:29:06.012 killing process with pid 2586505 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2586505 00:29:06.012 Received shutdown signal, test time was about 2.000000 seconds 00:29:06.012 00:29:06.012 Latency(us) 00:29:06.012 [2024-11-06T13:10:52.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.012 [2024-11-06T13:10:52.292Z] 
=================================================================================================================== 00:29:06.012 [2024-11-06T13:10:52.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2586505 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2587192 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2587192 /var/tmp/bperf.sock 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2587192 ']' 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:06.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:06.012 14:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:06.012 [2024-11-06 14:10:52.279725] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:29:06.012 [2024-11-06 14:10:52.279787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587192 ] 00:29:06.273 [2024-11-06 14:10:52.366863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.273 [2024-11-06 14:10:52.396481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.842 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:06.842 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:06.842 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.842 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.102 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.362 nvme0n1 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.362 14:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.362 Running I/O for 2 seconds... 
00:29:07.362 [2024-11-06 14:10:53.638480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebb98 00:29:07.362 [2024-11-06 14:10:53.639365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.362 [2024-11-06 14:10:53.639394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.647097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:07.624 [2024-11-06 14:10:53.647948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.647971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.656022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef20d8 00:29:07.624 [2024-11-06 14:10:53.656811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.656828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.664561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.665375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.665392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.673127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016edece0 00:29:07.624 [2024-11-06 14:10:53.673934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.673951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.681642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:07.624 [2024-11-06 14:10:53.682416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.690137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef20d8 00:29:07.624 [2024-11-06 14:10:53.690915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.690933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.698617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.699382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.707166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016edece0 00:29:07.624 [2024-11-06 14:10:53.707959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.707976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.715655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:07.624 [2024-11-06 14:10:53.716470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.716487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.724173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef20d8 00:29:07.624 [2024-11-06 14:10:53.724959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.724976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.732646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.733457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.733473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.741137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016edece0 00:29:07.624 [2024-11-06 14:10:53.741954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.741971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.749626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:07.624 [2024-11-06 14:10:53.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.750420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.758135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef20d8 00:29:07.624 [2024-11-06 14:10:53.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.758918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.766615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.767418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:07.624 [2024-11-06 14:10:53.767434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.775106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016edece0 00:29:07.624 [2024-11-06 14:10:53.775866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.775882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.783570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:07.624 [2024-11-06 14:10:53.784383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.784399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.792073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef20d8 00:29:07.624 [2024-11-06 14:10:53.792837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.792853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.800929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.801932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11300 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.801949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.809323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee27f0 00:29:07.624 [2024-11-06 14:10:53.810281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.817802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee38d0 00:29:07.624 [2024-11-06 14:10:53.818782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.818797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.826339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea680 00:29:07.624 [2024-11-06 14:10:53.827325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.827341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.834824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebb98 00:29:07.624 [2024-11-06 14:10:53.835812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.835828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.843351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee4de8 00:29:07.624 [2024-11-06 14:10:53.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.844372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.851862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee7c50 00:29:07.624 [2024-11-06 14:10:53.852808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.852824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.860358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.624 [2024-11-06 14:10:53.861354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.861370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.868879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee27f0 00:29:07.624 [2024-11-06 14:10:53.869906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.869925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.877406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee38d0 00:29:07.624 [2024-11-06 14:10:53.878410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.885953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea680 00:29:07.624 [2024-11-06 14:10:53.886969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.886986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.624 [2024-11-06 14:10:53.894489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebb98 00:29:07.624 [2024-11-06 14:10:53.895451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.624 [2024-11-06 14:10:53.895467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.886 [2024-11-06 14:10:53.902993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee4de8 
00:29:07.886 [2024-11-06 14:10:53.903950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.886 [2024-11-06 14:10:53.903966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.886 [2024-11-06 14:10:53.911517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee7c50 00:29:07.886 [2024-11-06 14:10:53.912521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.886 [2024-11-06 14:10:53.912537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.886 [2024-11-06 14:10:53.920007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.886 [2024-11-06 14:10:53.921000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.886 [2024-11-06 14:10:53.921016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.886 [2024-11-06 14:10:53.928496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee27f0 00:29:07.886 [2024-11-06 14:10:53.929488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.886 [2024-11-06 14:10:53.929504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.937003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75e750) with pdu=0x200016ee38d0 00:29:07.887 [2024-11-06 14:10:53.938010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.938026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.945509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea680 00:29:07.887 [2024-11-06 14:10:53.946497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.953994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebb98 00:29:07.887 [2024-11-06 14:10:53.954960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.954976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.962492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee4de8 00:29:07.887 [2024-11-06 14:10:53.963481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.963497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.970974] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee7c50 00:29:07.887 [2024-11-06 14:10:53.971947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.971963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.979466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed4e8 00:29:07.887 [2024-11-06 14:10:53.980458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.980474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.987385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0788 00:29:07.887 [2024-11-06 14:10:53.988307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.988322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:53.996781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef35f0 00:29:07.887 [2024-11-06 14:10:53.997872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:53.997888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:29:07.887 [2024-11-06 14:10:54.005406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee9168 00:29:07.887 [2024-11-06 14:10:54.006510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.006526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.013871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea248 00:29:07.887 [2024-11-06 14:10:54.014961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.014977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.022316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0350 00:29:07.887 [2024-11-06 14:10:54.023443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.023459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.030797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eedd58 00:29:07.887 [2024-11-06 14:10:54.031909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.031926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.039256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeee38 00:29:07.887 [2024-11-06 14:10:54.040386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.040402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.047721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed0b0 00:29:07.887 [2024-11-06 14:10:54.048806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.048823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.056172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebfd0 00:29:07.887 [2024-11-06 14:10:54.057294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.057310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.064635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efda78 00:29:07.887 [2024-11-06 14:10:54.065737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.065757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.073114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:07.887 [2024-11-06 14:10:54.074190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.074207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.081603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef8a50 00:29:07.887 [2024-11-06 14:10:54.082728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.082747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.090084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef7970 00:29:07.887 [2024-11-06 14:10:54.091195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.091211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.098549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef6890 00:29:07.887 [2024-11-06 14:10:54.099650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.099666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.107182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef57b0 00:29:07.887 [2024-11-06 14:10:54.108343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.108359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.115668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef46d0 00:29:07.887 [2024-11-06 14:10:54.116752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.116769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.124140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0ff8 00:29:07.887 [2024-11-06 14:10:54.125227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.125244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.132597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef3a28 00:29:07.887 [2024-11-06 14:10:54.133673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 
[2024-11-06 14:10:54.133689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.141079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee8d30 00:29:07.887 [2024-11-06 14:10:54.142199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.142215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.887 [2024-11-06 14:10:54.149531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee9e10 00:29:07.887 [2024-11-06 14:10:54.150653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.887 [2024-11-06 14:10:54.150670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.888 [2024-11-06 14:10:54.157982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeaef0 00:29:07.888 [2024-11-06 14:10:54.159090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.888 [2024-11-06 14:10:54.159106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.149 [2024-11-06 14:10:54.166467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed920 00:29:08.149 [2024-11-06 14:10:54.167592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22850 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:08.149 [2024-11-06 14:10:54.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.149 [2024-11-06 14:10:54.174941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeea00 00:29:08.150 [2024-11-06 14:10:54.176020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.176036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.183398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:08.150 [2024-11-06 14:10:54.184477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.184494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.191860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eec408 00:29:08.150 [2024-11-06 14:10:54.192966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.192983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.200329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efd640 00:29:08.150 [2024-11-06 14:10:54.201421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.201437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.208783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efe2e8 00:29:08.150 [2024-11-06 14:10:54.209902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.209919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.217256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef8e88 00:29:08.150 [2024-11-06 14:10:54.218355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.218371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.225732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef7da8 00:29:08.150 [2024-11-06 14:10:54.226814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.226831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.234222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef6cc8 00:29:08.150 [2024-11-06 14:10:54.235341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.235357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.242684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef5be8 00:29:08.150 [2024-11-06 14:10:54.243786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.243803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.251162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef4b08 00:29:08.150 [2024-11-06 14:10:54.252247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.252264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.259641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0bc0 00:29:08.150 [2024-11-06 14:10:54.260765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.260781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.268115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef1ca0 00:29:08.150 
[2024-11-06 14:10:54.269211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.269227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.276573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef35f0 00:29:08.150 [2024-11-06 14:10:54.277697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.277714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.285043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee9168 00:29:08.150 [2024-11-06 14:10:54.286153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.293507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea248 00:29:08.150 [2024-11-06 14:10:54.294611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.294628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.301975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75e750) with pdu=0x200016ef0350 00:29:08.150 [2024-11-06 14:10:54.303096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.303113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.310442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eedd58 00:29:08.150 [2024-11-06 14:10:54.311526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.311542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.318906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeee38 00:29:08.150 [2024-11-06 14:10:54.320024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.320040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.327383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed0b0 00:29:08.150 [2024-11-06 14:10:54.328493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.328509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.335838] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebfd0 00:29:08.150 [2024-11-06 14:10:54.336920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.336936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.344287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efda78 00:29:08.150 [2024-11-06 14:10:54.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.345368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.352770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.150 [2024-11-06 14:10:54.353875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.353891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.361246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef8a50 00:29:08.150 [2024-11-06 14:10:54.362353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.362370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:29:08.150 [2024-11-06 14:10:54.369743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef7970 00:29:08.150 [2024-11-06 14:10:54.370860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.370876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.378201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef6890 00:29:08.150 [2024-11-06 14:10:54.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.379345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.386645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef57b0 00:29:08.150 [2024-11-06 14:10:54.387760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.387779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.150 [2024-11-06 14:10:54.395110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef46d0 00:29:08.150 [2024-11-06 14:10:54.396233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.150 [2024-11-06 14:10:54.396251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.151 [2024-11-06 14:10:54.403612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0ff8 00:29:08.151 [2024-11-06 14:10:54.404734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.151 [2024-11-06 14:10:54.404755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.151 [2024-11-06 14:10:54.412089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef3a28 00:29:08.151 [2024-11-06 14:10:54.413175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.151 [2024-11-06 14:10:54.413191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.151 [2024-11-06 14:10:54.420548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee8d30 00:29:08.151 [2024-11-06 14:10:54.421652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.151 [2024-11-06 14:10:54.421668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.429010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee9e10 00:29:08.413 [2024-11-06 14:10:54.430135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.430152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.437457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeaef0 00:29:08.413 [2024-11-06 14:10:54.438573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.438589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.445933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed920 00:29:08.413 [2024-11-06 14:10:54.447055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.447072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.454404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeea00 00:29:08.413 [2024-11-06 14:10:54.455518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.455534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.462894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eefae0 00:29:08.413 [2024-11-06 14:10:54.464022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.464038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.471369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eec408 00:29:08.413 [2024-11-06 14:10:54.472471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.472488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.479825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efd640 00:29:08.413 [2024-11-06 14:10:54.480945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.480962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.488299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efe2e8 00:29:08.413 [2024-11-06 14:10:54.489424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 [2024-11-06 14:10:54.489441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.496793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef8e88 00:29:08.413 [2024-11-06 14:10:54.497888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.413 
[2024-11-06 14:10:54.497905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.413 [2024-11-06 14:10:54.505264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef7da8 00:29:08.413 [2024-11-06 14:10:54.506385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.506401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.513736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef6cc8 00:29:08.414 [2024-11-06 14:10:54.514808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.514824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.522183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef5be8 00:29:08.414 [2024-11-06 14:10:54.523306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.523323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.530635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef4b08 00:29:08.414 [2024-11-06 14:10:54.531755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2440 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.531772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.539124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0bc0 00:29:08.414 [2024-11-06 14:10:54.540230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.540248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.547588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef1ca0 00:29:08.414 [2024-11-06 14:10:54.548713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.548729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.556082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef35f0 00:29:08.414 [2024-11-06 14:10:54.557182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.557197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.564722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ee9168 00:29:08.414 [2024-11-06 14:10:54.565850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:17623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.565866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.573177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eea248 00:29:08.414 [2024-11-06 14:10:54.574297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.574313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.581637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016ef0350 00:29:08.414 [2024-11-06 14:10:54.582749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.582765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.590128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eedd58 00:29:08.414 [2024-11-06 14:10:54.591230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.591246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.598602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eeee38 00:29:08.414 [2024-11-06 14:10:54.599713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.599729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.607078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eed0b0 00:29:08.414 [2024-11-06 14:10:54.608183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.608202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.615540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016eebfd0 00:29:08.414 [2024-11-06 14:10:54.616662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.616678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.624003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efda78 00:29:08.414 [2024-11-06 14:10:54.625111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.625127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.632470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 
30025.00 IOPS, 117.29 MiB/s [2024-11-06T13:10:54.694Z] [2024-11-06 14:10:54.633581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.633597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.640957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.642049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.642065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.649448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.650554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.650569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.657908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.659006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.659021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.666435] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.667556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.667573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.674913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.676053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.414 [2024-11-06 14:10:54.683375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.414 [2024-11-06 14:10:54.684473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.414 [2024-11-06 14:10:54.684489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.691852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.692953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.692969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:29:08.676 [2024-11-06 14:10:54.700324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.701419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.701435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.708778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.709887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.709903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.717245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.718345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.718361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.725721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.726815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.734192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.735324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.742666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.743771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.743787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.751201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.752296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.752312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.759680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.760799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.760815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.768174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.769281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.769297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.776657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.777774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.777790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.785144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.786242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.786260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.793613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.794730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.802068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.803166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.803182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.810533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.811638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.811654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.819050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.820152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.820167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.827528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.828625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.676 [2024-11-06 14:10:54.828643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.836003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.837117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.837133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.844470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.845587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.845603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.852923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.854033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.854049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.861426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.862533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9937 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.862549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.869914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.871024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.871040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.878408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.879511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.879527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.886879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.888002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.888019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.895317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.676 [2024-11-06 14:10:54.896414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.676 [2024-11-06 14:10:54.896430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.676 [2024-11-06 14:10:54.903795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.904909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.904924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.677 [2024-11-06 14:10:54.912264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.913360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.913376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.677 [2024-11-06 14:10:54.920743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.921855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.921870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.677 [2024-11-06 14:10:54.929254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.930356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.930371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.677 [2024-11-06 14:10:54.937708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.938815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.938831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.677 [2024-11-06 14:10:54.946154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.677 [2024-11-06 14:10:54.947273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.677 [2024-11-06 14:10:54.947289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.954642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:54.955714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.955730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.963118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 
00:29:08.939 [2024-11-06 14:10:54.964219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.964234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.971597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:54.972717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.972733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.980064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:54.981175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.981191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.988529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:54.989628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.989645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:54.997015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:54.998117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:54.998132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.005510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.006621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.006637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.013990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.015099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.015115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.022458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.023549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.023565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.030939] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.032020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.032036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.039404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.040498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.040514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.047886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.049001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.049017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.939 [2024-11-06 14:10:55.056350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75e750) with pdu=0x200016efdeb0 00:29:08.939 [2024-11-06 14:10:55.057465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.939 [2024-11-06 14:10:55.057481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 
dnr:0
[... repeated tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error" / nvme_qpair COMMAND TRANSIENT TRANSPORT ERROR (00/22) entry triplets elided ...]
00:29:09.465 30089.00 IOPS, 117.54 MiB/s
00:29:09.465 Latency(us)
00:29:09.465 [2024-11-06T13:10:55.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.465 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:09.465 nvme0n1 : 2.00 30102.64 117.59 0.00 0.00 4247.29 2211.84 8956.59
00:29:09.465 [2024-11-06T13:10:55.745Z] ===================================================================================================================
00:29:09.465 [2024-11-06T13:10:55.745Z] Total : 30102.64 117.59 0.00 0.00 4247.29 2211.84 8956.59
00:29:09.465 {
00:29:09.465   "results": [
00:29:09.465     {
00:29:09.465       "job": "nvme0n1",
00:29:09.465       "core_mask": "0x2",
00:29:09.465       "workload": "randwrite",
00:29:09.465       "status": "finished",
00:29:09.465       "queue_depth": 128,
00:29:09.465       "io_size": 4096,
00:29:09.465       "runtime": 2.003346,
00:29:09.465       "iops": 30102.638286147274,
00:29:09.465       "mibps": 117.58843080526279,
00:29:09.465       "io_failed": 0,
00:29:09.465       "io_timeout": 0,
00:29:09.465       "avg_latency_us": 4247.287427895511,
00:29:09.465       "min_latency_us": 2211.84,
00:29:09.465       "max_latency_us": 8956.586666666666
00:29:09.465     }
00:29:09.465   ],
00:29:09.465   "core_count": 1
00:29:09.465 } 00:29:09.465 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:09.465 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:09.465 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:09.465 | .driver_specific 00:29:09.465 | .nvme_error 00:29:09.465 | .status_code 00:29:09.465 | .command_transient_transport_error' 00:29:09.465 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2587192 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2587192 ']' 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2587192 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2587192 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 2587192' 00:29:09.726 killing process with pid 2587192 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2587192 00:29:09.726 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.726 00:29:09.726 Latency(us) 00:29:09.726 [2024-11-06T13:10:56.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.726 [2024-11-06T13:10:56.006Z] =================================================================================================================== 00:29:09.726 [2024-11-06T13:10:56.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.726 14:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2587192 00:29:09.726 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2587876 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2587876 /var/tmp/bperf.sock 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2587876 ']' 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:09.987 14:10:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:09.987 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.987 [2024-11-06 14:10:56.065102] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:29:09.987 [2024-11-06 14:10:56.065171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587876 ] 00:29:09.987 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.987 Zero copy mechanism will not be used. 
00:29:09.987 [2024-11-06 14:10:56.148995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.987 [2024-11-06 14:10:56.178512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.929 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:10.929 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:10.929 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:10.929 14:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.929 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.190 nvme0n1 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:11.190 14:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.453 Zero copy mechanism will not be used. 00:29:11.453 Running I/O for 2 seconds... 00:29:11.453 [2024-11-06 14:10:57.535153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.535359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.535384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.540085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.540298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.540316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 
[2024-11-06 14:10:57.544935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.545292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.545311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.550196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.550269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.550285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.555750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.555808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.555824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.560931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.561012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.561028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.567605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.567830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.567846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.573423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.573504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.573519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.579789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.579834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.579849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.586288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.586355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.586370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.591897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.591978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.591993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.598078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.598123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.598139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.604735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.604843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.604858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.611557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.611604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.611620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.617235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.617536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.624663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.624831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.624846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.630245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.630297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.630318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.635353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.635415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 
[2024-11-06 14:10:57.635430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.641089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.641133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.641148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.648701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.648876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.648891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.656642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.656753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.656769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.662220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.662320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.667024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.667119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.667134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.671793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.671891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.671906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.453 [2024-11-06 14:10:57.676448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.453 [2024-11-06 14:10:57.676498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.453 [2024-11-06 14:10:57.676514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.681466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.681528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.681543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.685572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.685642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.685657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.689414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.689485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.689500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.693221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.693285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.693300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.698397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.698470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.702425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.702497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.702512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.705777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.705832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.705847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.708978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.709032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.709047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.712521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 
[2024-11-06 14:10:57.712594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.712609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.715797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.715870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.715885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.719108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.719179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.719194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.722882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.722931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.722946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.454 [2024-11-06 14:10:57.726580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.454 [2024-11-06 14:10:57.726634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.454 [2024-11-06 14:10:57.726649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-11-06 14:10:57.730007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.715 [2024-11-06 14:10:57.730054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-11-06 14:10:57.730069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-11-06 14:10:57.733268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.715 [2024-11-06 14:10:57.733333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-11-06 14:10:57.733347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-11-06 14:10:57.736718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.715 [2024-11-06 14:10:57.736787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-11-06 14:10:57.736802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-11-06 14:10:57.739987] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.715 [2024-11-06 14:10:57.740042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-11-06 14:10:57.740057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-11-06 14:10:57.743596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.716 [2024-11-06 14:10:57.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-11-06 14:10:57.743666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.716 [2024-11-06 14:10:57.746691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.716 [2024-11-06 14:10:57.746733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-11-06 14:10:57.746754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.716 [2024-11-06 14:10:57.749907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.716 [2024-11-06 14:10:57.749949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-11-06 14:10:57.749965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0
00:29:11.716 [2024-11-06 14:10:57.752986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.753051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.753066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.756177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.756221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.756236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.759190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.759242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.759257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.762619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.762677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.762693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.766501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.766600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.766615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.773291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.773357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.773372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.777802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.777887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.777903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.784140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.784196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.784210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.789862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.789981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.789996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.798333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.798596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.798611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.806234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.806549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.806565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.814456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.814707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.814722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.822929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.823090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.823105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.829544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.829719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.829734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.835382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.835497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.835513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.841203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.841280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.841294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.849594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.849699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.849713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.857018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.857124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.857139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.864830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.865000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.865015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.871266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.871381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.871396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.878000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.878066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.878080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.883591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.883701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.883716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.889422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.889528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.889543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.716 [2024-11-06 14:10:57.894303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.716 [2024-11-06 14:10:57.894348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.716 [2024-11-06 14:10:57.894365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.899221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.899387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.899402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.908339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.908432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.908447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.913076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.913149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.913164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.917636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.917728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.917743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.922144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.922236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.922251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.927047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.927134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.927148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.931653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.931735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.931756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.936123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.936237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.936252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.940320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.940388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.940403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.944365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.944473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.944488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.949743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.949831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.949846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.955162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.955404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.955419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.962384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.962460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.962476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.968820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.968886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.968901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.976755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.977078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.977094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.983514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.983625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.983641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.987783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.987859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.987875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.717 [2024-11-06 14:10:57.991540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.717 [2024-11-06 14:10:57.991628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.717 [2024-11-06 14:10:57.991643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:57.995166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:57.995236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:57.995251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:57.998916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:57.998965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:57.998979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.001979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.002044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.002059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.004942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.005018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.005033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.007856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.007936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.007952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.010778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.010883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.010898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.013682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.013733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.013830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.016331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.016399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.016417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.018934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.019012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.019028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.021862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.021917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.021932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.024592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.024644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.024659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.027367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.027416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.027431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.030213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.030311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.030326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.033331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.033372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.033386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.036258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.036304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.036319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.038963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.039005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.039020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.041725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.041783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.041801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.044441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.044488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.044503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.981 [2024-11-06 14:10:58.047218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.981 [2024-11-06 14:10:58.047273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.981 [2024-11-06 14:10:58.047288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.050177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.050269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.050284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.053422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.053484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.053499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.055966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.056024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.056039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.058519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.058570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.058585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.061067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.061114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.061129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.063579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.063629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.063643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.066260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.066303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.066319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.069215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.072682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.072731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.072751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.075560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.075605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.075621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.078307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.078346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.078361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.080960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.081004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.081020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.083678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.083719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.083735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.086352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.086404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.086419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.088990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.089051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.089065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.091724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.091779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.091794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.094404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:11.982 [2024-11-06 14:10:58.094465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.982 [2024-11-06 14:10:58.094480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.982 [2024-11-06 14:10:58.097079]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.097139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.097155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.099743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.099804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.099819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.102598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.102653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.102668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.105474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.105518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.105533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:11.982 [2024-11-06 14:10:58.108156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.108196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.108211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.110926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.110977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.110992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.113618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.113673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.113691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.116406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.116472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.116486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.119194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.119235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.119250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.121879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.982 [2024-11-06 14:10:58.121937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.982 [2024-11-06 14:10:58.121952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.982 [2024-11-06 14:10:58.124667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.124713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.124728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.127415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.127456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.127471] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.130119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.130161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.130176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.132880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.132939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.132954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.135584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.135624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.135639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.138271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.138340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.138356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.141464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.141513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.141528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.144667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.144741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.144761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.147222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.147274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.147289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.149772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.149822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.983 [2024-11-06 14:10:58.149837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.152325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.152396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.154910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.154956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.154971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.157673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.157729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.157744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.161188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.161245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.161261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.163755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.163806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.163821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.166301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.166352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.166367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.168849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.168935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.171470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.171525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.171540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.174575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.174622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.174637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.177721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.177780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.177796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.180360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.180424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.180439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.182972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.183017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.183032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.185621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.185668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.185686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.188300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.188352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.188368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.190932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.190972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.190987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.193587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with 
pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.193638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.193653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.196189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.983 [2024-11-06 14:10:58.196230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.983 [2024-11-06 14:10:58.196246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.983 [2024-11-06 14:10:58.198827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.198873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.201474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.201525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.201540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.204155] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.204221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.204236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.206856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.206912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.206927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.210258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.210327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.210342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.212908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.212960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.212975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 
14:10:58.215439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.215479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.215494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.217958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.218016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.218031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.220481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.220536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.220551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.223042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.223098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.223113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.225552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.225608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.225623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.228093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.228139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.230640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.230689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.230704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.233153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.233204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.233219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.235688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.235749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.235763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.238347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.238429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.238444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.241657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.241714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.244230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.244289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.244304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.246744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.246793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.246809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.249280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.249339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.249354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.251812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.251859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 [2024-11-06 14:10:58.251873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.984 [2024-11-06 14:10:58.254326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:11.984 [2024-11-06 14:10:58.254374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.984 
[2024-11-06 14:10:58.254392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.247 [2024-11-06 14:10:58.257087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.257167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.257182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.260730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.260929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.260945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.265795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.265884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.265899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.274221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.274450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.274465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.283418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.283740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.283761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.291952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.292128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.292143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.300170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.300275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.300291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.303716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.303810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.303825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.306505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.306601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.306616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.309245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.309350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.309366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.311973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.312052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.312067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.314738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.314845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.314860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.317371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.317445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.317460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.319944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.320026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.320041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.322523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.322619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.322634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.325520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 
[2024-11-06 14:10:58.325656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.325671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.328292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.328408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.328423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.331180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.331285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.331300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.335641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.335743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.341791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.341939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.341954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.346651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.346866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.346881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.352302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.352631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.357124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.357217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.359923] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.359982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.359997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.362692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.362765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.362780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.365287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.365349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.365367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-11-06 14:10:58.367904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.248 [2024-11-06 14:10:58.367956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-11-06 14:10:58.367971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.370659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.370711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.370726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.373249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.373303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.373318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.375838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.375893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.375908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.378400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.378457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.378472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.381004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.381058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.381073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.383539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.383594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.383608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.386313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.386383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.386399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.390067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.390176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.390191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.396445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.396676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.396691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.405938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.406134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.406150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.415554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.415805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.415820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.424607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.424910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:12.249 [2024-11-06 14:10:58.424925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.434283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.434573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.434588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.443008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.443221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.443236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.451781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.451936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.451951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.460579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.460843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.460864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.466830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.466991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.467005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.475164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.475473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.475489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.484741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.484892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.492039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.492178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.492193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.497857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.497921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.497935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.501944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.501990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.502005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.505608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.505729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.505744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.508955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 
00:29:12.249 [2024-11-06 14:10:58.509054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.509069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.512293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.512369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.512387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.515588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.515631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.515646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.249 [2024-11-06 14:10:58.518856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.249 [2024-11-06 14:10:58.518923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.249 [2024-11-06 14:10:58.518939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.250 [2024-11-06 14:10:58.522272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.250 [2024-11-06 14:10:58.522350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.250 [2024-11-06 14:10:58.522365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.513 7362.00 IOPS, 920.25 MiB/s [2024-11-06T13:10:58.793Z] [2024-11-06 14:10:58.526142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.513 [2024-11-06 14:10:58.526215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.513 [2024-11-06 14:10:58.526230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.513 [2024-11-06 14:10:58.528960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.513 [2024-11-06 14:10:58.529018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.513 [2024-11-06 14:10:58.529033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.513 [2024-11-06 14:10:58.532033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.513 [2024-11-06 14:10:58.532077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.513 [2024-11-06 14:10:58.532092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:12.513 [2024-11-06 14:10:58.535385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.513 [2024-11-06 14:10:58.535443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.513 [2024-11-06 14:10:58.535459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line record (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of further qid:1 WRITE commands from 14:10:58.538 through 14:10:58.823, with lba values ranging roughly 320-25472 (len:32 throughout), sqhd cycling 0001/0021/0041/0061, and cid shifting from 15 to 0 near the end ...]
00:29:12.780 [2024-11-06 14:10:58.828169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.828255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.828270] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.832579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.832687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.832702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.837003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.837195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.837211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.842080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.842265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.842280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.846220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.846344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.846360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.850525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.850643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.855308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.855589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.855606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.861740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.861886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.861901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.865583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.865664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:12.780 [2024-11-06 14:10:58.865679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.868866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.868967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.868982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.872008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.872122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.872137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.875113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.875255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.878262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.878355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.878370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.881331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.881430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.881445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.884371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.884455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.884470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.887011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.887096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.887111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.889596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.889693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.889708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.892459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.892535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.892550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.895287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.895404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.895420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.898039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.898125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.898140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.901158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.901228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.901247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.904852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.904942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.904957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.907790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.907865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.907882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.910449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.780 [2024-11-06 14:10:58.910522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.780 [2024-11-06 14:10:58.910537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.780 [2024-11-06 14:10:58.913064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with 
pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.913147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.913162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.915670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.915759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.915774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.918231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.918320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.918335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.920855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.920939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.920954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.923595] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.923675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.923690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.926174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.926246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.926261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.928763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.928847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.928863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.931354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.931425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.931440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 
14:10:58.933939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.934016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.934031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.936551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.936630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.936645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.939124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.939204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.939219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.941744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.941835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.941850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.944328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.944404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.944419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.946900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.946979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.946995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.949558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.949642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.949657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.952191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.952263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.952278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.954762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.954845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.954860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.957355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.957428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.957444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.959935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.960021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.960036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.962570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.962640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.962655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.965165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.965228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.965243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.967743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.967827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.967841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.970322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.970401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.970421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.972965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.973038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 
[2024-11-06 14:10:58.973054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.975562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.975641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.975656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.978122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.978199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.978214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.781 [2024-11-06 14:10:58.980620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.781 [2024-11-06 14:10:58.980693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.781 [2024-11-06 14:10:58.980708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.782 [2024-11-06 14:10:58.983135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.782 [2024-11-06 14:10:58.983216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.782 [2024-11-06 14:10:58.983231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.782 [2024-11-06 14:10:58.985709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.782 [2024-11-06 14:10:58.985787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.782 [2024-11-06 14:10:58.985803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.782 [2024-11-06 14:10:58.988242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.782 [2024-11-06 14:10:58.988316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.782 [2024-11-06 14:10:58.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.782 [2024-11-06 14:10:58.990850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.782 [2024-11-06 14:10:58.990934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.782 [2024-11-06 14:10:58.990949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.782 [2024-11-06 14:10:58.993515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:12.782 [2024-11-06 14:10:58.993600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:58.993616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:58.996442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:58.996522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:58.996537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:58.999704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:58.999788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:58.999804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.002312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.002392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.002408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.004818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.004906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.004921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.007304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.007386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.007401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.009811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.009894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.009909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.012307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.012388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.012403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.014811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.014895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.014910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.017304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.017382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.017397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.019861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.019931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.019947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.022500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.022598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.022614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.025509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.025593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.025609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.028736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.028825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.028840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.031407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.031495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.031510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.033959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.034044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.034059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.036581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.036689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.039952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.040101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.040118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.044544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.044708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.782 [2024-11-06 14:10:59.051974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:12.782 [2024-11-06 14:10:59.052102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.782 [2024-11-06 14:10:59.052118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.055365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.055514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.059690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.059817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.059833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.063601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.063764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.067273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.067394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.067410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.070146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.070261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.070276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.072842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.072944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.072960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.075466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.075575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.075591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.078053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.078305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.078320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.083686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.083766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.083781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.086698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.086767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.086782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.089920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.089993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.090008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.095354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.095463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.046 [2024-11-06 14:10:59.095478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.046 [2024-11-06 14:10:59.099589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.046 [2024-11-06 14:10:59.099659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.099674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.103518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.103611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.103626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.107505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.107548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.107563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.112072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.112114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.112129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.115276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.115330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.115345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.118427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.118492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.118507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.121289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.121350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.121364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.124525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.124604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.124619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.127628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.127699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.127714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.130356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.130416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.130431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.132961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.133038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.135527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.135616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.138104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.138161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.138176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.140619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.140682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.140697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.143183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.143235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.143250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.145721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.145764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.145780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.148342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.148411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.148425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.151099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.151167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.151181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.155951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.156050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.156064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.160525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.160637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.160652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.163528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.163635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.163650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.166922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.167034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.167049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.169504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.169589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.169604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.172044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.172127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.172142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.174759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.174856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.174871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.180015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.180075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.180090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.182861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.182941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.047 [2024-11-06 14:10:59.182956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.047 [2024-11-06 14:10:59.185596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.047 [2024-11-06 14:10:59.185670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.185685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.188248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.188306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.188321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.190951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.191038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.191053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.193664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.193741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.193760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.196387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.196469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.196483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.199120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.199207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.199222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.201826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.201910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.201925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.204510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.204587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.204601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.207569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.207620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.207635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.210401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.210452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.210466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.213056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.213128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.213145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.215742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.215808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.215823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.218429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.218488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.218502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.221158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.221240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.221255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.223854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.223908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.223923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.226797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.226910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.226925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.230504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.230616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.230631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.236132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.236359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.236374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.242003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.242078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.242092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.245019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.245116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.245132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.247917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.247988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.248003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.250797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.250918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.250933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.254455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.254659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.048 [2024-11-06 14:10:59.254674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:13.048 [2024-11-06 14:10:59.259499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90
00:29:13.048 [2024-11-06 14:10:59.259607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.048 [2024-11-06 14:10:59.259622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.048 [2024-11-06 14:10:59.266884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.048 [2024-11-06 14:10:59.266998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.048 [2024-11-06 14:10:59.267013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.048 [2024-11-06 14:10:59.271775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.048 [2024-11-06 14:10:59.271950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.048 [2024-11-06 14:10:59.271966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.048 [2024-11-06 14:10:59.276770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.048 [2024-11-06 14:10:59.276864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.048 [2024-11-06 14:10:59.276879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.048 [2024-11-06 14:10:59.283021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with 
pdu=0x200016efef90 00:29:13.048 [2024-11-06 14:10:59.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.283352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.287411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.287498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.287513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.291208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.291310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.294919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.294990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.295005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.298603] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.298681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.298696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.302601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.302694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.302709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.305348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.305425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.305441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.308214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.308302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.308316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 
14:10:59.311068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.311160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.313929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.314023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.314041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.316979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.317054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.317069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.049 [2024-11-06 14:10:59.320393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.049 [2024-11-06 14:10:59.320445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.049 [2024-11-06 14:10:59.320460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.324184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.324279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.324294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.327840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.327926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.327940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.334679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.334959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.334973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.339039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.339127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.339142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.342053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.342127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.342142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.345162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.345256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.345271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.348423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.348518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.348533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.351759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.351817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.351832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.354996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.355060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.355075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.358116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.358191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.358206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.361377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.361461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.361476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.364897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.364954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:13.312 [2024-11-06 14:10:59.364968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.367499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.312 [2024-11-06 14:10:59.367566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.312 [2024-11-06 14:10:59.367580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.312 [2024-11-06 14:10:59.370088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.370165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.370180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.372666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.372721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.372736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.375521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.375577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.375592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.378358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.378465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.381875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.381951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.381966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.384782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.384891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.384906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.387670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.387771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.387786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.390598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.390718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.390733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.394046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.394137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.394152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.396993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.397062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.397077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.399947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.400000] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.400018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.402815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.402876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.402890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.405806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.405881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.405895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.408709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.408811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.408826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.411612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 
00:29:13.313 [2024-11-06 14:10:59.411680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.411695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.415131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.415198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.415213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.418249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.418320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.418334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.421089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.421133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.421148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.423723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.423835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.423850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.426901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.426987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.427001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.433567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.433651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.433665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.440959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.441036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.441052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.448838] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.449053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.449068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.455595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.455685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.455700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.460325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.460453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.460468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.463992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.464055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.464070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:13.313 [2024-11-06 14:10:59.467196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.467311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.313 [2024-11-06 14:10:59.467326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.313 [2024-11-06 14:10:59.470771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.313 [2024-11-06 14:10:59.470856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.470871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.474025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.474110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.474125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.477332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.477426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.477440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.480619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.480723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.480737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.483779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.483879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.487366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.487469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.487484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.494835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.495012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.495027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.500459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.500566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.500581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.503500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.503630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.503645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.506436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.506534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.506552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.509637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.509717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.509732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.514093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.514267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.514282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.519004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.519140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.519155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.522030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.522134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 [2024-11-06 14:10:59.522149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.314 [2024-11-06 14:10:59.525055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x75ea90) with pdu=0x200016efef90 00:29:13.314 [2024-11-06 14:10:59.525194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.314 
[2024-11-06 14:10:59.525209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.314 8143.00 IOPS, 1017.88 MiB/s 00:29:13.314 Latency(us) 00:29:13.314 [2024-11-06T13:10:59.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.314 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:13.314 nvme0n1 : 2.00 8142.51 1017.81 0.00 0.00 1961.94 1092.27 10922.67 00:29:13.314 [2024-11-06T13:10:59.594Z] =================================================================================================================== 00:29:13.314 [2024-11-06T13:10:59.594Z] Total : 8142.51 1017.81 0.00 0.00 1961.94 1092.27 10922.67 00:29:13.314 { 00:29:13.314 "results": [ 00:29:13.314 { 00:29:13.314 "job": "nvme0n1", 00:29:13.314 "core_mask": "0x2", 00:29:13.314 "workload": "randwrite", 00:29:13.314 "status": "finished", 00:29:13.314 "queue_depth": 16, 00:29:13.314 "io_size": 131072, 00:29:13.314 "runtime": 2.002455, 00:29:13.314 "iops": 8142.505075020413, 00:29:13.314 "mibps": 1017.8131343775516, 00:29:13.314 "io_failed": 0, 00:29:13.314 "io_timeout": 0, 00:29:13.314 "avg_latency_us": 1961.939200654196, 00:29:13.314 "min_latency_us": 1092.2666666666667, 00:29:13.314 "max_latency_us": 10922.666666666666 00:29:13.314 } 00:29:13.314 ], 00:29:13.314 "core_count": 1 00:29:13.314 } 00:29:13.314 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:13.314 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:13.314 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:13.314 | .driver_specific 00:29:13.314 | .nvme_error 00:29:13.314 | .status_code 00:29:13.314 | .command_transient_transport_error' 00:29:13.314 14:10:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 525 > 0 )) 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2587876 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2587876 ']' 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2587876 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2587876 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2587876' 00:29:13.574 killing process with pid 2587876 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2587876 00:29:13.574 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.574 00:29:13.574 Latency(us) 00:29:13.574 [2024-11-06T13:10:59.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.574 [2024-11-06T13:10:59.854Z] 
=================================================================================================================== 00:29:13.574 [2024-11-06T13:10:59.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.574 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2587876 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2585476 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2585476 ']' 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2585476 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2585476 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2585476' 00:29:13.836 killing process with pid 2585476 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2585476 00:29:13.836 14:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2585476 00:29:13.836 00:29:13.836 real 0m16.504s 00:29:13.836 user 0m32.509s 00:29:13.836 sys 0m3.777s 00:29:13.836 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:29:13.836 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.836 ************************************ 00:29:13.836 END TEST nvmf_digest_error 00:29:13.836 ************************************ 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.098 rmmod nvme_tcp 00:29:14.098 rmmod nvme_fabrics 00:29:14.098 rmmod nvme_keyring 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2585476 ']' 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2585476 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2585476 ']' 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2585476 00:29:14.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2585476) - No such process 00:29:14.098 14:11:00 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2585476 is not found' 00:29:14.098 Process with pid 2585476 is not found 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.098 14:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.011 14:11:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.272 00:29:16.272 real 0m43.482s 00:29:16.272 user 1m7.805s 00:29:16.272 sys 0m13.577s 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:16.272 ************************************ 00:29:16.272 END TEST nvmf_digest 00:29:16.272 ************************************ 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.272 ************************************ 00:29:16.272 START TEST nvmf_bdevperf 00:29:16.272 ************************************ 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.272 * Looking for test storage... 
00:29:16.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:16.272 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.535 --rc genhtml_branch_coverage=1 00:29:16.535 --rc genhtml_function_coverage=1 00:29:16.535 --rc genhtml_legend=1 00:29:16.535 --rc geninfo_all_blocks=1 00:29:16.535 --rc geninfo_unexecuted_blocks=1 00:29:16.535 00:29:16.535 ' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:29:16.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.535 --rc genhtml_branch_coverage=1 00:29:16.535 --rc genhtml_function_coverage=1 00:29:16.535 --rc genhtml_legend=1 00:29:16.535 --rc geninfo_all_blocks=1 00:29:16.535 --rc geninfo_unexecuted_blocks=1 00:29:16.535 00:29:16.535 ' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.535 --rc genhtml_branch_coverage=1 00:29:16.535 --rc genhtml_function_coverage=1 00:29:16.535 --rc genhtml_legend=1 00:29:16.535 --rc geninfo_all_blocks=1 00:29:16.535 --rc geninfo_unexecuted_blocks=1 00:29:16.535 00:29:16.535 ' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.535 --rc genhtml_branch_coverage=1 00:29:16.535 --rc genhtml_function_coverage=1 00:29:16.535 --rc genhtml_legend=1 00:29:16.535 --rc geninfo_all_blocks=1 00:29:16.535 --rc geninfo_unexecuted_blocks=1 00:29:16.535 00:29:16.535 ' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.535 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.536 14:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.700 14:11:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.700 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:24.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.701 
14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:24.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:24.701 Found net devices under 0000:31:00.0: cvl_0_0 00:29:24.701 14:11:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:24.701 Found net devices under 0000:31:00.1: cvl_0_1 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.701 14:11:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:29:24.701 00:29:24.701 --- 10.0.0.2 ping statistics --- 00:29:24.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.701 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:24.701 00:29:24.701 --- 10.0.0.1 ping statistics --- 00:29:24.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.701 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:24.701 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2592924 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2592924 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2592924 ']' 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
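The `nvmf_tcp_init` portion of the trace above moves one port of the NIC into a private network namespace so target and initiator can talk over real hardware on one host. The sequence can be sketched as a dry-run script; the namespace, interface names, addresses, and port are the ones printed by the trace, while the `run` helper is hypothetical and only echoes each command (the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology nvmftestinit builds in this log.
# "run" is a hypothetical helper: it records and prints instead of executing.
plan=""
run() { plan="$plan+ $* "; echo "+ $*"; }

NS=cvl_0_0_ns_spdk               # namespace that will own the target-side port
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1   # the two ports found under 0000:31:00.0/.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                  # target port leaves the root ns
run ip addr add 10.0.0.1/24 dev "$INIT_IF"             # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                 # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns
```

The two pings at the end correspond to the successful 10.0.0.2 / 10.0.0.1 ping output in the trace; once both succeed, the target is started with `ip netns exec cvl_0_0_ns_spdk` so it listens inside the namespace.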
00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:24.702 14:11:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.702 [2024-11-06 14:11:10.278868] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:29:24.702 [2024-11-06 14:11:10.278950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.702 [2024-11-06 14:11:10.380539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.702 [2024-11-06 14:11:10.432817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.702 [2024-11-06 14:11:10.432865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.702 [2024-11-06 14:11:10.432874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.702 [2024-11-06 14:11:10.432881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.702 [2024-11-06 14:11:10.432888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.702 [2024-11-06 14:11:10.434929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.702 [2024-11-06 14:11:10.435219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.702 [2024-11-06 14:11:10.435221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.963 [2024-11-06 14:11:11.160961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.963 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.964 Malloc0 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.964 [2024-11-06 14:11:11.232075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:24.964 
14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.964 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.964 { 00:29:24.964 "params": { 00:29:24.964 "name": "Nvme$subsystem", 00:29:24.964 "trtype": "$TEST_TRANSPORT", 00:29:24.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.964 "adrfam": "ipv4", 00:29:24.964 "trsvcid": "$NVMF_PORT", 00:29:24.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.964 "hdgst": ${hdgst:-false}, 00:29:24.964 "ddgst": ${ddgst:-false} 00:29:24.964 }, 00:29:24.964 "method": "bdev_nvme_attach_controller" 00:29:24.964 } 00:29:24.964 EOF 00:29:24.964 )") 00:29:25.225 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:25.225 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:25.225 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:25.225 14:11:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.225 "params": { 00:29:25.225 "name": "Nvme1", 00:29:25.225 "trtype": "tcp", 00:29:25.225 "traddr": "10.0.0.2", 00:29:25.225 "adrfam": "ipv4", 00:29:25.225 "trsvcid": "4420", 00:29:25.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.225 "hdgst": false, 00:29:25.225 "ddgst": false 00:29:25.225 }, 00:29:25.225 "method": "bdev_nvme_attach_controller" 00:29:25.225 }' 00:29:25.225 [2024-11-06 14:11:11.291362] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
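The `gen_nvmf_target_json` steps traced above build one `bdev_nvme_attach_controller` JSON fragment per subsystem from a heredoc, then pipe the result through `jq .` and feed it to bdevperf on `/dev/fd/62`. A minimal sketch of the expansion for subsystem 1, with the variable values this run used substituted in (the surrounding `jq` merge is omitted):

```shell
# Sketch of the heredoc expansion gen_nvmf_target_json performs for subsystem 1.
# Values are the ones printed by the trace for this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

With `hdgst`/`ddgst` unset, the `:-false` defaults apply, which matches the final `printf '%s\n'` output in the trace (`"hdgst": false, "ddgst": false`).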
00:29:25.225 [2024-11-06 14:11:11.291437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593149 ] 00:29:25.225 [2024-11-06 14:11:11.386349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.225 [2024-11-06 14:11:11.439842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.486 Running I/O for 1 seconds... 00:29:26.429 8620.00 IOPS, 33.67 MiB/s 00:29:26.429 Latency(us) 00:29:26.429 [2024-11-06T13:11:12.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.429 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.429 Verification LBA range: start 0x0 length 0x4000 00:29:26.429 Nvme1n1 : 1.01 8637.15 33.74 0.00 0.00 14755.70 2935.47 13052.59 00:29:26.429 [2024-11-06T13:11:12.709Z] =================================================================================================================== 00:29:26.429 [2024-11-06T13:11:12.709Z] Total : 8637.15 33.74 0.00 0.00 14755.70 2935.47 13052.59 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2593386 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.691 { 00:29:26.691 "params": { 00:29:26.691 "name": "Nvme$subsystem", 00:29:26.691 "trtype": "$TEST_TRANSPORT", 00:29:26.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.691 "adrfam": "ipv4", 00:29:26.691 "trsvcid": "$NVMF_PORT", 00:29:26.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.691 "hdgst": ${hdgst:-false}, 00:29:26.691 "ddgst": ${ddgst:-false} 00:29:26.691 }, 00:29:26.691 "method": "bdev_nvme_attach_controller" 00:29:26.691 } 00:29:26.691 EOF 00:29:26.691 )") 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:26.691 14:11:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.691 "params": { 00:29:26.691 "name": "Nvme1", 00:29:26.691 "trtype": "tcp", 00:29:26.691 "traddr": "10.0.0.2", 00:29:26.691 "adrfam": "ipv4", 00:29:26.691 "trsvcid": "4420", 00:29:26.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.691 "hdgst": false, 00:29:26.691 "ddgst": false 00:29:26.691 }, 00:29:26.691 "method": "bdev_nvme_attach_controller" 00:29:26.691 }' 00:29:26.691 [2024-11-06 14:11:12.844384] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
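Unlike the first 1-second run, this second bdevperf invocation runs for 15 seconds with `-f` (continue on failure) so the harness can kill the target mid-run and exercise reconnect. The orchestration visible in the `host/bdevperf.sh` trace markers can be sketched as another dry-run (the `run` helper is hypothetical and only echoes; pid and options are from this log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the failover step host/bdevperf.sh drives in this log.
plan=""
run() { plan="$plan+ $* "; echo "+ $*"; }

BDEVPERF=./build/examples/bdevperf
TGT_PID=2592924   # nvmfpid captured when the target started

run "$BDEVPERF" --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f  # sh@29-30: long run, keep going on errors
run sleep 3                                                          # sh@32: let I/O ramp up
run kill -9 "$TGT_PID"                                               # sh@33: hard-kill the target mid-run
run sleep 3                                                          # sh@35: let the host observe the dead connection
```

The flood of `ABORTED - SQ DELETION (00/08)` completions that follows in the trace is the expected consequence of that `kill -9`: the target's submission queues vanish, so every in-flight read/write on qid 1 is completed with an abort status while bdevperf (thanks to `-f`) keeps running and later reconnects.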
00:29:26.691 [2024-11-06 14:11:12.844441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593386 ] 00:29:26.691 [2024-11-06 14:11:12.931890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.691 [2024-11-06 14:11:12.967315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.952 Running I/O for 15 seconds... 00:29:29.280 11003.00 IOPS, 42.98 MiB/s [2024-11-06T13:11:15.824Z] 11141.00 IOPS, 43.52 MiB/s [2024-11-06T13:11:15.824Z] 14:11:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2592924 00:29:29.544 14:11:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:29.544 [2024-11-06 14:11:15.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.809908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.809929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.809940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.809952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.809960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.809970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.809979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.809989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.809998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.810018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-11-06 14:11:15.810036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 14:11:15.810181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.544 [2024-11-06 14:11:15.810192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.544 [2024-11-06 
14:11:15.810201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:29.545 [2024-11-06 14:11:15.810212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:29.545 [2024-11-06 14:11:15.810220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:29.545 [2024-11-06 14:11:15.810248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.545 [2024-11-06 14:11:15.810256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining queued I/O on qid:1: READs covering lba 108240-109048 and WRITEs covering lba 109120-109200 (len:8 each), all completed as ABORTED - SQ DELETION (00/08) ...]
00:29:29.548 [2024-11-06 14:11:15.812132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2550 is same with the state(6) to be set
00:29:29.548 [2024-11-06 14:11:15.812141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:29.548 [2024-11-06
14:11:15.812147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:29.548 [2024-11-06 14:11:15.812154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109048 len:8 PRP1 0x0 PRP2 0x0 00:29:29.548 [2024-11-06 14:11:15.812165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.548 [2024-11-06 14:11:15.815753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.548 [2024-11-06 14:11:15.815807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.548 [2024-11-06 14:11:15.816570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-11-06 14:11:15.816587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-11-06 14:11:15.816596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.548 [2024-11-06 14:11:15.816822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.548 [2024-11-06 14:11:15.817044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.548 [2024-11-06 14:11:15.817053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.548 [2024-11-06 14:11:15.817062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.548 [2024-11-06 14:11:15.817077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.811 [2024-11-06 14:11:15.829857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.811 [2024-11-06 14:11:15.830393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-11-06 14:11:15.830411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-11-06 14:11:15.830420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.811 [2024-11-06 14:11:15.830640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.811 [2024-11-06 14:11:15.830874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.811 [2024-11-06 14:11:15.830885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.811 [2024-11-06 14:11:15.830893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.811 [2024-11-06 14:11:15.830900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.811 [2024-11-06 14:11:15.843680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.811 [2024-11-06 14:11:15.844344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-11-06 14:11:15.844384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-11-06 14:11:15.844395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.811 [2024-11-06 14:11:15.844638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.811 [2024-11-06 14:11:15.844873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.811 [2024-11-06 14:11:15.844883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.811 [2024-11-06 14:11:15.844891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.811 [2024-11-06 14:11:15.844899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.811 [2024-11-06 14:11:15.857683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.811 [2024-11-06 14:11:15.858354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-11-06 14:11:15.858395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-11-06 14:11:15.858407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.811 [2024-11-06 14:11:15.858649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.811 [2024-11-06 14:11:15.858884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.811 [2024-11-06 14:11:15.858894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.811 [2024-11-06 14:11:15.858902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.811 [2024-11-06 14:11:15.858910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.811 [2024-11-06 14:11:15.871697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.811 [2024-11-06 14:11:15.872354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-11-06 14:11:15.872396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-11-06 14:11:15.872407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.811 [2024-11-06 14:11:15.872648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.811 [2024-11-06 14:11:15.872884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.811 [2024-11-06 14:11:15.872894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.811 [2024-11-06 14:11:15.872902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.811 [2024-11-06 14:11:15.872910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.811 [2024-11-06 14:11:15.885691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.811 [2024-11-06 14:11:15.886382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-11-06 14:11:15.886424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.886435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.886678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.886910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.886920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.886928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.886937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.899509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.900168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.900212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.900224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.900467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.900693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.900701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.900709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.900718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.913508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.914188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.914235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.914247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.914497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.914722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.914732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.914739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.914758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.927336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.927908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.927932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.927941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.928162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.928384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.928393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.928400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.928408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.941202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.941778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.941800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.941808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.942029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.942249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.942257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.942265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.942272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.955060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.955755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.955812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.955824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.956076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.956304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.956327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.956336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.956346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.968941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.969628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.969689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.969702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.969973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.970202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.970211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.970220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.970229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.982818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.983508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.812 [2024-11-06 14:11:15.983569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.812 [2024-11-06 14:11:15.983582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.812 [2024-11-06 14:11:15.983852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.812 [2024-11-06 14:11:15.984082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.812 [2024-11-06 14:11:15.984091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.812 [2024-11-06 14:11:15.984100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.812 [2024-11-06 14:11:15.984109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.812 [2024-11-06 14:11:15.996736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.812 [2024-11-06 14:11:15.997466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:15.997530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:15.997544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:15.997819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:15.998047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:15.998058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:15.998069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:15.998085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.010696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.011414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.011476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.011489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.011757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.011986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.011995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.012004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.012013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.024613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.025221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.025251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.025260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.025484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.025706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.025716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.025724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.025731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.038555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.039299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.039360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.039373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.039629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.039873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.039883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.039892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.039901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.052518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.053250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.053313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.053325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.053581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.053821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.053832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.053840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.053849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.066440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.067041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.067072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.067081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.067304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.067527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.067536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.067544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.067553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.813 [2024-11-06 14:11:16.080380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.813 [2024-11-06 14:11:16.080741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.813 [2024-11-06 14:11:16.080776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:29.813 [2024-11-06 14:11:16.080785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:29.813 [2024-11-06 14:11:16.081006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:29.813 [2024-11-06 14:11:16.081229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.813 [2024-11-06 14:11:16.081237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.813 [2024-11-06 14:11:16.081245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.813 [2024-11-06 14:11:16.081252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.078 [2024-11-06 14:11:16.094285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.094908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.094970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.094983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.095246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.095475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.095486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.095494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.095504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 [2024-11-06 14:11:16.108224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.108852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.108914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.108928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.109185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.109413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.109424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.109433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.109442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 [2024-11-06 14:11:16.122072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.122666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.122697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.122707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.122939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.123163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.123173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.123181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.123189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 9969.00 IOPS, 38.94 MiB/s [2024-11-06T13:11:16.358Z] [2024-11-06 14:11:16.136079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.136795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.136857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.136872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.137128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.137358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.137375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.137384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.137393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 [2024-11-06 14:11:16.150063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.150728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.150802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.150815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.151071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.151300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.151311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.151319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.151328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 [2024-11-06 14:11:16.163981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.164701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.078 [2024-11-06 14:11:16.164775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.078 [2024-11-06 14:11:16.164790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.078 [2024-11-06 14:11:16.165045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.078 [2024-11-06 14:11:16.165273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.078 [2024-11-06 14:11:16.165283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.078 [2024-11-06 14:11:16.165292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.078 [2024-11-06 14:11:16.165301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.078 [2024-11-06 14:11:16.177952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.078 [2024-11-06 14:11:16.178594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.178623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.178632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.178869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.179093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.179103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.179111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.179126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.191962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.192655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.192718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.192733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.193007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.193236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.193247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.193255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.193265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.205922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.206615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.206677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.206690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.206962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.207193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.207203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.207212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.207221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.219873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.220467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.220497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.220506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.220729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.220964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.220976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.220985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.220993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.233893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.234472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.234496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.234505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.234727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.234972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.234983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.234991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.234999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.247856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.248522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.248584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.248597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.248883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.249113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.249123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.249131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.249140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.261814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.262284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.262315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.262324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.262550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.262782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.262793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.262800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.262808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.275657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.276178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.276203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.276212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.276442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.276664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.276674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.276681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.276689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.289551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.290285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.079 [2024-11-06 14:11:16.290347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.079 [2024-11-06 14:11:16.290360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.079 [2024-11-06 14:11:16.290616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.079 [2024-11-06 14:11:16.290860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.079 [2024-11-06 14:11:16.290870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.079 [2024-11-06 14:11:16.290879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.079 [2024-11-06 14:11:16.290888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.079 [2024-11-06 14:11:16.303546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.079 [2024-11-06 14:11:16.304325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-11-06 14:11:16.304387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.080 [2024-11-06 14:11:16.304400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.080 [2024-11-06 14:11:16.304656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.080 [2024-11-06 14:11:16.304901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.080 [2024-11-06 14:11:16.304911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.080 [2024-11-06 14:11:16.304919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.080 [2024-11-06 14:11:16.304928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.080 [2024-11-06 14:11:16.317585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.080 [2024-11-06 14:11:16.318184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-11-06 14:11:16.318216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.080 [2024-11-06 14:11:16.318225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.080 [2024-11-06 14:11:16.318451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.080 [2024-11-06 14:11:16.318673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.080 [2024-11-06 14:11:16.318690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.080 [2024-11-06 14:11:16.318698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.080 [2024-11-06 14:11:16.318706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.080 [2024-11-06 14:11:16.331578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.080 [2024-11-06 14:11:16.332073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-11-06 14:11:16.332100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.080 [2024-11-06 14:11:16.332109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.080 [2024-11-06 14:11:16.332331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.080 [2024-11-06 14:11:16.332552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.080 [2024-11-06 14:11:16.332562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.080 [2024-11-06 14:11:16.332570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.080 [2024-11-06 14:11:16.332578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.080 [2024-11-06 14:11:16.345447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.080 [2024-11-06 14:11:16.345898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-11-06 14:11:16.345925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.080 [2024-11-06 14:11:16.345933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.080 [2024-11-06 14:11:16.346155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.080 [2024-11-06 14:11:16.346377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.080 [2024-11-06 14:11:16.346387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.080 [2024-11-06 14:11:16.346395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.080 [2024-11-06 14:11:16.346402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.343 [2024-11-06 14:11:16.359290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.343 [2024-11-06 14:11:16.359868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.343 [2024-11-06 14:11:16.359894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.343 [2024-11-06 14:11:16.359903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.343 [2024-11-06 14:11:16.360125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.343 [2024-11-06 14:11:16.360347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.343 [2024-11-06 14:11:16.360358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.343 [2024-11-06 14:11:16.360367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.343 [2024-11-06 14:11:16.360382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.343 [2024-11-06 14:11:16.373231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.343 [2024-11-06 14:11:16.373802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.343 [2024-11-06 14:11:16.373828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.343 [2024-11-06 14:11:16.373837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.343 [2024-11-06 14:11:16.374058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.343 [2024-11-06 14:11:16.374281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.343 [2024-11-06 14:11:16.374291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.343 [2024-11-06 14:11:16.374299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.343 [2024-11-06 14:11:16.374307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.343 [2024-11-06 14:11:16.387155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.343 [2024-11-06 14:11:16.387721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.343 [2024-11-06 14:11:16.387754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.343 [2024-11-06 14:11:16.387763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.343 [2024-11-06 14:11:16.387984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.343 [2024-11-06 14:11:16.388206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.343 [2024-11-06 14:11:16.388216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.343 [2024-11-06 14:11:16.388224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.343 [2024-11-06 14:11:16.388232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.343 [2024-11-06 14:11:16.401080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.343 [2024-11-06 14:11:16.401625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.343 [2024-11-06 14:11:16.401648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.343 [2024-11-06 14:11:16.401657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.343 [2024-11-06 14:11:16.401887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.343 [2024-11-06 14:11:16.402110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.343 [2024-11-06 14:11:16.402118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.343 [2024-11-06 14:11:16.402127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.343 [2024-11-06 14:11:16.402135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.343 [2024-11-06 14:11:16.414972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.344 [2024-11-06 14:11:16.415547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.344 [2024-11-06 14:11:16.415570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.344 [2024-11-06 14:11:16.415578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.344 [2024-11-06 14:11:16.415810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.344 [2024-11-06 14:11:16.416033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.344 [2024-11-06 14:11:16.416041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.344 [2024-11-06 14:11:16.416049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.344 [2024-11-06 14:11:16.416057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.344 [2024-11-06 14:11:16.428908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.344 [2024-11-06 14:11:16.429482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.344 [2024-11-06 14:11:16.429507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.344 [2024-11-06 14:11:16.429515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.344 [2024-11-06 14:11:16.429736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.344 [2024-11-06 14:11:16.429971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.344 [2024-11-06 14:11:16.429982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.344 [2024-11-06 14:11:16.429990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.344 [2024-11-06 14:11:16.429998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.344 [2024-11-06 14:11:16.442877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.344 [2024-11-06 14:11:16.443546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.344 [2024-11-06 14:11:16.443603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.344 [2024-11-06 14:11:16.443616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.344 [2024-11-06 14:11:16.443884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.344 [2024-11-06 14:11:16.444113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.344 [2024-11-06 14:11:16.444123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.344 [2024-11-06 14:11:16.444131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.344 [2024-11-06 14:11:16.444140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.344 [2024-11-06 14:11:16.456829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.344 [2024-11-06 14:11:16.457417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.344 [2024-11-06 14:11:16.457448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.344 [2024-11-06 14:11:16.457457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.344 [2024-11-06 14:11:16.457688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.344 [2024-11-06 14:11:16.457924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.344 [2024-11-06 14:11:16.457936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.344 [2024-11-06 14:11:16.457944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.344 [2024-11-06 14:11:16.457956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.344 [2024-11-06 14:11:16.470825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.344 [2024-11-06 14:11:16.471520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.344 [2024-11-06 14:11:16.471581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.344 [2024-11-06 14:11:16.471594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.344 [2024-11-06 14:11:16.471863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.344 [2024-11-06 14:11:16.472092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.344 [2024-11-06 14:11:16.472102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.344 [2024-11-06 14:11:16.472110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.344 [2024-11-06 14:11:16.472119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.344 [2024-11-06 14:11:16.484779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.344 [2024-11-06 14:11:16.485460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.344 [2024-11-06 14:11:16.485521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.344 [2024-11-06 14:11:16.485534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.344 [2024-11-06 14:11:16.485803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.344 [2024-11-06 14:11:16.486032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.344 [2024-11-06 14:11:16.486041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.344 [2024-11-06 14:11:16.486049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.344 [2024-11-06 14:11:16.486058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.344 [2024-11-06 14:11:16.498702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.344 [2024-11-06 14:11:16.499305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.344 [2024-11-06 14:11:16.499336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.344 [2024-11-06 14:11:16.499345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.344 [2024-11-06 14:11:16.499568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.344 [2024-11-06 14:11:16.499802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.344 [2024-11-06 14:11:16.499826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.344 [2024-11-06 14:11:16.499834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.344 [2024-11-06 14:11:16.499841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.344 [2024-11-06 14:11:16.512691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.344 [2024-11-06 14:11:16.513271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.344 [2024-11-06 14:11:16.513296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.344 [2024-11-06 14:11:16.513305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.344 [2024-11-06 14:11:16.513527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.344 [2024-11-06 14:11:16.513762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.344 [2024-11-06 14:11:16.513772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.344 [2024-11-06 14:11:16.513780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.344 [2024-11-06 14:11:16.513787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.344 [2024-11-06 14:11:16.526634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.344 [2024-11-06 14:11:16.527325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.344 [2024-11-06 14:11:16.527387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.344 [2024-11-06 14:11:16.527400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.344 [2024-11-06 14:11:16.527655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.344 [2024-11-06 14:11:16.527897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.344 [2024-11-06 14:11:16.527909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.344 [2024-11-06 14:11:16.527917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.344 [2024-11-06 14:11:16.527926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.344 [2024-11-06 14:11:16.540591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.344 [2024-11-06 14:11:16.541200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.344 [2024-11-06 14:11:16.541231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.344 [2024-11-06 14:11:16.541240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.344 [2024-11-06 14:11:16.541463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.344 [2024-11-06 14:11:16.541686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.344 [2024-11-06 14:11:16.541696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.541704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.541719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.345 [2024-11-06 14:11:16.554595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.345 [2024-11-06 14:11:16.555319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.345 [2024-11-06 14:11:16.555380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.345 [2024-11-06 14:11:16.555393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.345 [2024-11-06 14:11:16.555649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.345 [2024-11-06 14:11:16.555896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.345 [2024-11-06 14:11:16.555918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.555926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.555935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.345 [2024-11-06 14:11:16.568558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.345 [2024-11-06 14:11:16.569064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.345 [2024-11-06 14:11:16.569097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.345 [2024-11-06 14:11:16.569107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.345 [2024-11-06 14:11:16.569332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.345 [2024-11-06 14:11:16.569555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.345 [2024-11-06 14:11:16.569565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.569572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.569580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.345 [2024-11-06 14:11:16.582438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.345 [2024-11-06 14:11:16.583154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.345 [2024-11-06 14:11:16.583217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.345 [2024-11-06 14:11:16.583231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.345 [2024-11-06 14:11:16.583487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.345 [2024-11-06 14:11:16.583716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.345 [2024-11-06 14:11:16.583726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.583734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.583743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.345 [2024-11-06 14:11:16.596405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.345 [2024-11-06 14:11:16.597045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.345 [2024-11-06 14:11:16.597074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.345 [2024-11-06 14:11:16.597083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.345 [2024-11-06 14:11:16.597308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.345 [2024-11-06 14:11:16.597531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.345 [2024-11-06 14:11:16.597539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.597548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.597556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.345 [2024-11-06 14:11:16.610424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.345 [2024-11-06 14:11:16.610999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.345 [2024-11-06 14:11:16.611025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.345 [2024-11-06 14:11:16.611033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.345 [2024-11-06 14:11:16.611256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.345 [2024-11-06 14:11:16.611478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.345 [2024-11-06 14:11:16.611488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.345 [2024-11-06 14:11:16.611495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.345 [2024-11-06 14:11:16.611503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.624352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.624966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.624992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.625000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.625223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.625444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.625454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.625462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.625469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.638396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.639150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.639163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.639425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.639655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.639664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.639673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.639682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.652368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.653079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.653141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.653154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.653409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.653637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.653649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.653657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.653666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.666336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.667003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.667034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.667043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.667267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.667489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.667502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.667510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.667517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.680172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.680740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.680774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.680784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.681009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.681231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.681248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.681257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.681265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.694125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.694688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.694712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.694721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.694955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.695178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.695190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.695198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.695206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.708069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.708634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.708697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.708710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.708983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.709214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.709227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.709236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.709245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.722085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.722687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.722718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.722727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.722959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.723183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.723193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.723201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.723217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.736048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.736726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.736803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.736818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.737076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.737318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.737329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.737339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.737349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.750002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.750603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.750632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.750640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.750873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.751097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.751106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.609 [2024-11-06 14:11:16.751113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.609 [2024-11-06 14:11:16.751121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.609 [2024-11-06 14:11:16.763929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.609 [2024-11-06 14:11:16.764516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-11-06 14:11:16.764540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-11-06 14:11:16.764549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.609 [2024-11-06 14:11:16.764903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.609 [2024-11-06 14:11:16.765130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.609 [2024-11-06 14:11:16.765139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.610 [2024-11-06 14:11:16.765147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.610 [2024-11-06 14:11:16.765154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.610 [2024-11-06 14:11:16.777769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.610 [2024-11-06 14:11:16.778365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-11-06 14:11:16.778391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-11-06 14:11:16.778399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.610 [2024-11-06 14:11:16.778621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.610 [2024-11-06 14:11:16.778852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.610 [2024-11-06 14:11:16.778864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.610 [2024-11-06 14:11:16.778872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.610 [2024-11-06 14:11:16.778880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.610 [2024-11-06 14:11:16.791699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.792273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.792297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.792306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.792529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.792761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.792773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.792781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.792789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.805602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.806174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.806198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.806206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.806428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.806649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.806660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.806667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.806675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.819491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.820104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.820127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.820136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.820365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.820586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.820596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.820604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.820612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.833439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.834024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.834049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.834058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.834280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.834501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.834519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.834527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.834535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.847295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.848032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.848095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.848108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.848365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.848593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.848603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.848612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.848621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.861269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.861872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.861933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.861948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.862205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.862433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.862451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.862460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.862469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.610 [2024-11-06 14:11:16.875303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.610 [2024-11-06 14:11:16.875835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.610 [2024-11-06 14:11:16.875866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.610 [2024-11-06 14:11:16.875875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.610 [2024-11-06 14:11:16.876099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.610 [2024-11-06 14:11:16.876322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.610 [2024-11-06 14:11:16.876331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.610 [2024-11-06 14:11:16.876340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.610 [2024-11-06 14:11:16.876348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.888035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.888540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.888562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.888568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.888722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.888885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.888892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.888899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.888905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.900709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.901271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.901322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.901332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.901513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.901672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.901680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.901686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.901700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.913378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.913877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.913925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.913935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.914116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.914273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.914280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.914286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.914293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.926106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.926712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.926768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.926779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.926956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.927112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.927120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.927126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.927133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.938807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.939214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.939234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.939240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.939393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.939546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.939552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.939558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.939563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.951507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.952122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.952161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.952169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.952341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.952497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.952504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.952509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.952516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.964168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.964775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.964811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.964820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.964991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.965146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.965153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.965159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.965165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.976813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.977406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.977441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.977450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.977620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.977783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.977790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.977796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.977802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:16.989441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:16.990083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:16.990117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:16.990126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:16.990299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:16.990454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:16.990461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:16.990467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:16.990472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:17.002118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:17.002702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:17.002734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:17.002743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:17.002919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:17.003074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:17.003080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:17.003086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:17.003092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:17.014868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:17.015445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:17.015476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:17.015485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:17.015653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.873 [2024-11-06 14:11:17.015816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.873 [2024-11-06 14:11:17.015823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.873 [2024-11-06 14:11:17.015829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.873 [2024-11-06 14:11:17.015835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.873 [2024-11-06 14:11:17.027607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.873 [2024-11-06 14:11:17.028179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.873 [2024-11-06 14:11:17.028210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.873 [2024-11-06 14:11:17.028219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.873 [2024-11-06 14:11:17.028386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.028540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.028550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.028555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.028561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.040350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.040868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.040898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.040907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.041075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.041230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.041236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.041242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.041247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.053034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.053624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.053654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.053663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.053836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.053991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.053997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.054003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.054009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.065778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.066354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.066384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.066393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.066559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.066713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.066719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.066725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.066734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.078514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.079125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.079155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.079164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.079330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.079484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.079491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.079496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.079502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.091143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.091723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.091758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.091768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.091937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.092091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.092097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.092103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.092108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.104045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-06 14:11:17.104555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-06 14:11:17.104570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-06 14:11:17.104575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.874 [2024-11-06 14:11:17.104726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.874 [2024-11-06 14:11:17.104884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-06 14:11:17.104891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-06 14:11:17.104896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-06 14:11:17.104901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-06 14:11:17.116662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-06 14:11:17.117223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-06 14:11:17.117253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-06 14:11:17.117262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:30.874 [2024-11-06 14:11:17.117429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:30.874 [2024-11-06 14:11:17.117583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-06 14:11:17.117589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-06 14:11:17.117595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-06 14:11:17.117600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 7476.75 IOPS, 29.21 MiB/s [2024-11-06T13:11:17.154Z] [2024-11-06 14:11:17.130517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.131021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.131052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.131060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.131229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.131383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.874 [2024-11-06 14:11:17.131390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.874 [2024-11-06 14:11:17.131395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.874 [2024-11-06 14:11:17.131401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.874 [2024-11-06 14:11:17.143190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.874 [2024-11-06 14:11:17.143772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.874 [2024-11-06 14:11:17.143802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:30.874 [2024-11-06 14:11:17.143811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:30.874 [2024-11-06 14:11:17.143980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:30.874 [2024-11-06 14:11:17.144134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.875 [2024-11-06 14:11:17.144140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.875 [2024-11-06 14:11:17.144145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.875 [2024-11-06 14:11:17.144151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.138 [2024-11-06 14:11:17.155940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.138 [2024-11-06 14:11:17.156519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.138 [2024-11-06 14:11:17.156550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.138 [2024-11-06 14:11:17.156558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.138 [2024-11-06 14:11:17.156732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.138 [2024-11-06 14:11:17.156893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.138 [2024-11-06 14:11:17.156901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.138 [2024-11-06 14:11:17.156907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.138 [2024-11-06 14:11:17.156912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.138 [2024-11-06 14:11:17.168679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.138 [2024-11-06 14:11:17.169259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.138 [2024-11-06 14:11:17.169290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.138 [2024-11-06 14:11:17.169298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.138 [2024-11-06 14:11:17.169464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.138 [2024-11-06 14:11:17.169619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.138 [2024-11-06 14:11:17.169625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.138 [2024-11-06 14:11:17.169631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.138 [2024-11-06 14:11:17.169636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.138 [2024-11-06 14:11:17.181415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.138 [2024-11-06 14:11:17.182005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.138 [2024-11-06 14:11:17.182035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.138 [2024-11-06 14:11:17.182044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.182211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.182365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.182371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.182377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.182383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.194159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.194734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.194769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.194777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.194944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.195099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.195109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.195114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.195119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.206895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.207484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.207514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.207523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.207689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.207851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.207858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.207863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.207869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.219638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.220166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.220197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.220206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.220372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.220526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.220532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.220538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.220543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.232324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.232949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.232979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.232988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.233154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.233309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.233316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.233322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.233331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.244973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.245466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.245496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.245505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.245672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.245835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.245843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.245849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.245855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.257630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.258194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.258224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.258233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.258400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.258554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.258560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.258566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.258571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.270352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.270862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.270892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.270901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.271070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.271224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.271230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.271236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.271241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.283018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.283602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.283632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.283640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.283814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.139 [2024-11-06 14:11:17.283974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.139 [2024-11-06 14:11:17.283981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.139 [2024-11-06 14:11:17.283987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.139 [2024-11-06 14:11:17.283992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.139 [2024-11-06 14:11:17.295760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.139 [2024-11-06 14:11:17.296339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.139 [2024-11-06 14:11:17.296368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.139 [2024-11-06 14:11:17.296377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.139 [2024-11-06 14:11:17.296543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.296698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.296704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.296710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.296715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.308491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.309085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.309115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.309124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.309293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.309447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.309454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.309459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.309465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.321242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.321731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.321750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.321756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.321911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.322062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.322068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.322073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.322078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.333985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.334571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.334602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.334610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.334783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.334939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.334945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.334951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.334956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.346654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.347206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.347235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.347244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.347413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.347568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.347574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.347580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.347586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.359371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.359961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.359991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.359999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.360168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.360322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.360332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.360337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.360343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.372120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.372713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.372743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.372759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.372927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.373082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.373088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.373093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.373099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.384864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.385440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.385470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.385479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.385648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.385809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.385816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.385822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.385828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.397597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.398159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.398189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.398198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.398364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.398519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.140 [2024-11-06 14:11:17.398525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.140 [2024-11-06 14:11:17.398530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.140 [2024-11-06 14:11:17.398539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.140 [2024-11-06 14:11:17.410316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.140 [2024-11-06 14:11:17.410875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.140 [2024-11-06 14:11:17.410905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.140 [2024-11-06 14:11:17.410914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.140 [2024-11-06 14:11:17.411084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.140 [2024-11-06 14:11:17.411238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.141 [2024-11-06 14:11:17.411244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.141 [2024-11-06 14:11:17.411250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.141 [2024-11-06 14:11:17.411256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.403 [2024-11-06 14:11:17.423037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.403 [2024-11-06 14:11:17.423615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.403 [2024-11-06 14:11:17.423645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.403 [2024-11-06 14:11:17.423654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.403 [2024-11-06 14:11:17.423825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.403 [2024-11-06 14:11:17.423980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.403 [2024-11-06 14:11:17.423987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.403 [2024-11-06 14:11:17.423992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.403 [2024-11-06 14:11:17.423998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.403 [2024-11-06 14:11:17.435785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.403 [2024-11-06 14:11:17.436347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-11-06 14:11:17.436377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.403 [2024-11-06 14:11:17.436385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.403 [2024-11-06 14:11:17.436552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.403 [2024-11-06 14:11:17.436707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.403 [2024-11-06 14:11:17.436713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.403 [2024-11-06 14:11:17.436718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.403 [2024-11-06 14:11:17.436724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.403 [2024-11-06 14:11:17.448505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.448974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.449003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.449011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.449178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.449332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.449338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.449344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.449349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.461137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.461714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.461749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.461759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.461928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.462082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.462089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.462097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.462103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.473885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.474453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.474483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.474492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.474658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.474819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.474827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.474832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.474838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.486621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.487177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.487207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.487216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.487386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.487541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.487548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.487555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.487561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.499344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.499905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.499935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.499944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.500114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.500268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.500274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.500280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.500285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.512068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.512548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.512578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.512587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.512761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.512916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.512923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.512929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.512934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.524701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.525240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.525270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.525279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.525448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.525602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.525612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.525618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.525623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.537397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.538025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.538055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.538064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.538230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.538384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.538391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.538396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.538402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.550054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.404 [2024-11-06 14:11:17.550629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-11-06 14:11:17.550659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.404 [2024-11-06 14:11:17.550667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.404 [2024-11-06 14:11:17.550845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.404 [2024-11-06 14:11:17.551000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.404 [2024-11-06 14:11:17.551006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.404 [2024-11-06 14:11:17.551011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.404 [2024-11-06 14:11:17.551017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.404 [2024-11-06 14:11:17.562789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.563373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.563403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.563412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.563579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.563733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.563739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.563752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.563762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.575533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.576088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.576118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.576127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.576293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.576447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.576454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.576459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.576464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.588242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.588781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.588811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.588820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.588988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.589142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.589148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.589155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.589161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.600949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.601526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.601556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.601565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.601732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.601892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.601899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.601905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.601911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.613685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.614323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.614353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.614362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.614529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.614683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.614689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.614694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.614700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.626333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.626852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.626882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.626890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.627059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.627214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.627220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.627225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.627231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.639010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.639496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.639511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.639517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.639668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.639825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.639831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.639837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.639842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.651768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.652344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.652373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.652382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.652552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.652706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.652713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.652718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.652724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.664503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.405 [2024-11-06 14:11:17.665116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-11-06 14:11:17.665146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.405 [2024-11-06 14:11:17.665155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.405 [2024-11-06 14:11:17.665322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.405 [2024-11-06 14:11:17.665476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.405 [2024-11-06 14:11:17.665482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.405 [2024-11-06 14:11:17.665487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.405 [2024-11-06 14:11:17.665493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.405 [2024-11-06 14:11:17.677130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.406 [2024-11-06 14:11:17.677706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.406 [2024-11-06 14:11:17.677736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:31.406 [2024-11-06 14:11:17.677751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:31.406 [2024-11-06 14:11:17.677920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:31.406 [2024-11-06 14:11:17.678075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.406 [2024-11-06 14:11:17.678081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.406 [2024-11-06 14:11:17.678087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.406 [2024-11-06 14:11:17.678092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.673 [2024-11-06 14:11:17.689874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.690448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.690478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.690487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.690654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.690815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.690826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.690832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.690838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.673 [2024-11-06 14:11:17.702609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.703086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.703101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.703107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.703258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.703410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.703416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.703420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.703425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.673 [2024-11-06 14:11:17.715337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.715834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.715864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.715872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.716041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.716195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.716201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.716207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.716213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.673 [2024-11-06 14:11:17.728016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.728592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.728622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.728630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.728804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.728959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.728966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.728971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.728980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.673 [2024-11-06 14:11:17.740767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.741344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.741374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.741383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.741550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.741704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.741711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.741716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.741722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.673 [2024-11-06 14:11:17.753504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.673 [2024-11-06 14:11:17.754068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.673 [2024-11-06 14:11:17.754098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.673 [2024-11-06 14:11:17.754107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.673 [2024-11-06 14:11:17.754273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.673 [2024-11-06 14:11:17.754428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.673 [2024-11-06 14:11:17.754434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.673 [2024-11-06 14:11:17.754440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.673 [2024-11-06 14:11:17.754445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.766224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.766802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.766832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.766841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.767010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.767164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.767171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.767176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.767182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.778962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.779471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.779500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.779509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.779676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.779838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.779845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.779851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.779856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.791624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.792185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.792215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.792224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.792391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.792545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.792552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.792557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.792563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.804342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.804855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.804885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.804894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.805063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.805218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.805224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.805230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.805235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.817022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.817509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.817524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.817530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.817685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.817840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.817847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.817852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.817856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.829771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.830332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.830362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.830371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.830538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.830694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.830701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.830707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.830714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.842508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.843126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.843156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.843165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.843332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.843487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.843493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.843499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.843505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.855155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.855611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.855626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.855632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.855788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.855940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.855950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.855955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.855960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.867868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.868278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.674 [2024-11-06 14:11:17.868292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.674 [2024-11-06 14:11:17.868297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.674 [2024-11-06 14:11:17.868449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.674 [2024-11-06 14:11:17.868599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.674 [2024-11-06 14:11:17.868605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.674 [2024-11-06 14:11:17.868610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.674 [2024-11-06 14:11:17.868615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.674 [2024-11-06 14:11:17.880595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.674 [2024-11-06 14:11:17.881088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.675 [2024-11-06 14:11:17.881102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.675 [2024-11-06 14:11:17.881108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.675 [2024-11-06 14:11:17.881259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.675 [2024-11-06 14:11:17.881410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.675 [2024-11-06 14:11:17.881415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.675 [2024-11-06 14:11:17.881420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.675 [2024-11-06 14:11:17.881425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.675 [2024-11-06 14:11:17.893340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.675 [2024-11-06 14:11:17.893950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.675 [2024-11-06 14:11:17.893980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.675 [2024-11-06 14:11:17.893989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.675 [2024-11-06 14:11:17.894155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.675 [2024-11-06 14:11:17.894309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.675 [2024-11-06 14:11:17.894316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.675 [2024-11-06 14:11:17.894322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.675 [2024-11-06 14:11:17.894331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.675 [2024-11-06 14:11:17.905972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.675 [2024-11-06 14:11:17.906554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.675 [2024-11-06 14:11:17.906584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.675 [2024-11-06 14:11:17.906592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.675 [2024-11-06 14:11:17.906765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.675 [2024-11-06 14:11:17.906920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.675 [2024-11-06 14:11:17.906926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.675 [2024-11-06 14:11:17.906932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.675 [2024-11-06 14:11:17.906937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.675 [2024-11-06 14:11:17.918729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.675 [2024-11-06 14:11:17.919330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.675 [2024-11-06 14:11:17.919361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.675 [2024-11-06 14:11:17.919369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.675 [2024-11-06 14:11:17.919536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.675 [2024-11-06 14:11:17.919690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.675 [2024-11-06 14:11:17.919696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.675 [2024-11-06 14:11:17.919702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.675 [2024-11-06 14:11:17.919708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.675 [2024-11-06 14:11:17.931345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.675 [2024-11-06 14:11:17.931855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.675 [2024-11-06 14:11:17.931885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:31.675 [2024-11-06 14:11:17.931894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:31.675 [2024-11-06 14:11:17.932063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:31.675 [2024-11-06 14:11:17.932217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.675 [2024-11-06 14:11:17.932223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.675 [2024-11-06 14:11:17.932229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.675 [2024-11-06 14:11:17.932234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.675 [2024-11-06 14:11:17.944033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:17.944627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:17.944659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:17.944668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:17.944841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:17.944996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:17.945002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:17.945008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:17.945013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:17.956662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:17.957063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:17.957079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:17.957085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:17.957236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:17.957388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:17.957393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:17.957398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:17.957403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:17.969321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:17.969790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:17.969811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:17.969817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:17.969973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:17.970126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:17.970132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:17.970137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:17.970142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:17.982061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:17.982444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:17.982458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:17.982464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:17.982618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:17.982774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:17.982781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:17.982786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:17.982790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:17.994699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:17.995055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:17.995068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:17.995074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:17.995225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:17.995376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:17.995381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:17.995386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:17.995391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:18.007448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:18.007917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:18.007930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:18.007936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:18.008086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:18.008237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:18.008243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:18.008248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:18.008253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:18.020167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:18.020623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:18.020635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:18.020641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:18.020798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:18.020950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:18.020959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:18.020964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:18.020969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:18.032878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.017 [2024-11-06 14:11:18.033341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.017 [2024-11-06 14:11:18.033353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.017 [2024-11-06 14:11:18.033359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.017 [2024-11-06 14:11:18.033509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.017 [2024-11-06 14:11:18.033660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.017 [2024-11-06 14:11:18.033666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.017 [2024-11-06 14:11:18.033671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.017 [2024-11-06 14:11:18.033676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.017 [2024-11-06 14:11:18.045486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.017 [2024-11-06 14:11:18.045810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.017 [2024-11-06 14:11:18.045825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.017 [2024-11-06 14:11:18.045831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.017 [2024-11-06 14:11:18.045982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.046133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.046138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.046144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.046148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.058216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.058691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.058704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.058709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.058866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.059017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.059023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.059028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.059036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.070947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.071329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.071341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.071347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.071497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.071648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.071654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.071659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.071664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.083577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.084094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.084123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.084132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.084299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.084453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.084459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.084465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.084471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.096253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.096739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.096760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.096766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.096918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.097069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.097076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.097082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.097087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.109013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.109573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.109602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.109611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.109784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.109939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.109945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.109951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.109956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.121735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.122271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.122302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.122311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.122477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.122631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.122637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.122643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.122648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 5981.40 IOPS, 23.36 MiB/s [2024-11-06T13:11:18.298Z] [2024-11-06 14:11:18.135431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.136075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.136105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.136114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.136283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.136437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.136444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.136449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.136455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.148101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.148482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.148497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.148503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.148657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.148814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.018 [2024-11-06 14:11:18.148821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.018 [2024-11-06 14:11:18.148826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.018 [2024-11-06 14:11:18.148831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.018 [2024-11-06 14:11:18.160753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.018 [2024-11-06 14:11:18.161280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.018 [2024-11-06 14:11:18.161293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.018 [2024-11-06 14:11:18.161299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.018 [2024-11-06 14:11:18.161450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.018 [2024-11-06 14:11:18.161601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.161607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.161612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.161617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.173411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.173865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.173878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.173883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.174034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.174185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.174191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.174196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.174201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.186123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.186609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.186622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.186627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.186783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.186935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.186944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.186949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.186954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.198863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.199416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.199446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.199454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.199621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.199781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.199788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.199794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.199800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.211578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.211986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.212001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.212007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.212158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.212309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.212314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.212319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.212324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.224239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.224737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.224759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.224910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.225061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.225066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.225071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.225080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.236862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.237430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.237469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.237635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.237795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.237802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.237807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.237813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.249603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.249952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.249968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.249974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.250126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.250277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.250283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.250288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.250294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.019 [2024-11-06 14:11:18.262227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.019 [2024-11-06 14:11:18.262691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.019 [2024-11-06 14:11:18.262704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.019 [2024-11-06 14:11:18.262710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.019 [2024-11-06 14:11:18.262865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.019 [2024-11-06 14:11:18.263017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.019 [2024-11-06 14:11:18.263023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.019 [2024-11-06 14:11:18.263028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.019 [2024-11-06 14:11:18.263033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.308 [2024-11-06 14:11:18.274955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.308 [2024-11-06 14:11:18.275307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.308 [2024-11-06 14:11:18.275319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.308 [2024-11-06 14:11:18.275325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.308 [2024-11-06 14:11:18.275476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.308 [2024-11-06 14:11:18.275626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.308 [2024-11-06 14:11:18.275632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.308 [2024-11-06 14:11:18.275637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.308 [2024-11-06 14:11:18.275642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.308 [2024-11-06 14:11:18.287704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.308 [2024-11-06 14:11:18.288156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.308 [2024-11-06 14:11:18.288169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.308 [2024-11-06 14:11:18.288174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.308 [2024-11-06 14:11:18.288325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.308 [2024-11-06 14:11:18.288476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.308 [2024-11-06 14:11:18.288481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.308 [2024-11-06 14:11:18.288486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.308 [2024-11-06 14:11:18.288491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.308 [2024-11-06 14:11:18.300406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.308 [2024-11-06 14:11:18.301062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.308 [2024-11-06 14:11:18.301092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.308 [2024-11-06 14:11:18.301101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.308 [2024-11-06 14:11:18.301268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.308 [2024-11-06 14:11:18.301422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.308 [2024-11-06 14:11:18.301428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.308 [2024-11-06 14:11:18.301434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.308 [2024-11-06 14:11:18.301439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.308 [2024-11-06 14:11:18.313080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.308 [2024-11-06 14:11:18.313568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.308 [2024-11-06 14:11:18.313583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.308 [2024-11-06 14:11:18.313589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.308 [2024-11-06 14:11:18.313744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.308 [2024-11-06 14:11:18.313901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.308 [2024-11-06 14:11:18.313907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.308 [2024-11-06 14:11:18.313912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.308 [2024-11-06 14:11:18.313917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.308 [2024-11-06 14:11:18.325832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.308 [2024-11-06 14:11:18.326315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-11-06 14:11:18.326345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-11-06 14:11:18.326354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.308 [2024-11-06 14:11:18.326523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.308 [2024-11-06 14:11:18.326677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.308 [2024-11-06 14:11:18.326684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.308 [2024-11-06 14:11:18.326689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.308 [2024-11-06 14:11:18.326695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.308 [2024-11-06 14:11:18.338505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.308 [2024-11-06 14:11:18.339079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-11-06 14:11:18.339109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-11-06 14:11:18.339118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.308 [2024-11-06 14:11:18.339284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.308 [2024-11-06 14:11:18.339439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.308 [2024-11-06 14:11:18.339445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.308 [2024-11-06 14:11:18.339451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.308 [2024-11-06 14:11:18.339457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.308 [2024-11-06 14:11:18.351241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.308 [2024-11-06 14:11:18.351739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-11-06 14:11:18.351760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-11-06 14:11:18.351766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.308 [2024-11-06 14:11:18.351917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.308 [2024-11-06 14:11:18.352069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.308 [2024-11-06 14:11:18.352078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.308 [2024-11-06 14:11:18.352083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.308 [2024-11-06 14:11:18.352089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.308 [2024-11-06 14:11:18.363887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.308 [2024-11-06 14:11:18.364359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-11-06 14:11:18.364373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-11-06 14:11:18.364379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.308 [2024-11-06 14:11:18.364530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.308 [2024-11-06 14:11:18.364681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.308 [2024-11-06 14:11:18.364687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.308 [2024-11-06 14:11:18.364692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.308 [2024-11-06 14:11:18.364697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.308 [2024-11-06 14:11:18.376620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.308 [2024-11-06 14:11:18.377160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-11-06 14:11:18.377173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-11-06 14:11:18.377179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.377329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.377480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.377486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.377491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.377496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.389271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.389724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.389737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.389742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.389897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.390048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.390055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.390060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.390070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.402000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.402488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.402501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.402507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.402657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.402813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.402820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.402825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.402830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.414613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.415180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.415210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.415219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.415386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.415540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.415547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.415552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.415557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.427359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.427707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.427722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.427728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.427884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.428043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.428049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.428054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.428058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.439994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.440548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.440578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.440587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.440761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.440916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.440923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.440929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.440935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.452732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.453192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.453208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.453213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.453365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.453516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.453521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.453527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.453532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.465473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.465961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.465974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.465980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.466131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.466282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.466288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.466293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.466298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.478088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.478575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.478588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.478593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.478752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.478904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.478910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.478915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.478919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.490702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.491243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-11-06 14:11:18.491274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-11-06 14:11:18.491283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.309 [2024-11-06 14:11:18.491450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.309 [2024-11-06 14:11:18.491604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.309 [2024-11-06 14:11:18.491611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.309 [2024-11-06 14:11:18.491616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.309 [2024-11-06 14:11:18.491622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.309 [2024-11-06 14:11:18.503429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.309 [2024-11-06 14:11:18.503900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.503917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.503923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.504074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.504226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.504233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.504238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.504243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.516177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.516635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.516648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.516654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.516810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.516962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.516971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.516976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.516981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.528916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.529267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.529282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.529287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.529438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.529589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.529594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.529599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.529604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.541542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.542086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.542116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.542125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.542292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.542446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.542453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.542458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.542465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.554263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.554738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.554765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.554771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.554923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.555075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.555081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.555086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.555094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.566885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.567375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.567388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.567394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.567545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.567695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.567701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.567706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.567711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.310 [2024-11-06 14:11:18.579504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.310 [2024-11-06 14:11:18.579827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.310 [2024-11-06 14:11:18.579840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.310 [2024-11-06 14:11:18.579845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.310 [2024-11-06 14:11:18.579996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.310 [2024-11-06 14:11:18.580147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.310 [2024-11-06 14:11:18.580153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.310 [2024-11-06 14:11:18.580158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.310 [2024-11-06 14:11:18.580163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.591 [2024-11-06 14:11:18.592240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.591 [2024-11-06 14:11:18.592694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.591 [2024-11-06 14:11:18.592707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.591 [2024-11-06 14:11:18.592712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.591 [2024-11-06 14:11:18.592868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.591 [2024-11-06 14:11:18.593019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.591 [2024-11-06 14:11:18.593025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.591 [2024-11-06 14:11:18.593030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.591 [2024-11-06 14:11:18.593035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.591 [2024-11-06 14:11:18.604965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.591 [2024-11-06 14:11:18.605434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.591 [2024-11-06 14:11:18.605447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.591 [2024-11-06 14:11:18.605453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.591 [2024-11-06 14:11:18.605603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.591 [2024-11-06 14:11:18.605759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.591 [2024-11-06 14:11:18.605766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.591 [2024-11-06 14:11:18.605771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.591 [2024-11-06 14:11:18.605776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.591 [2024-11-06 14:11:18.617700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.591 [2024-11-06 14:11:18.618283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.591 [2024-11-06 14:11:18.618312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.591 [2024-11-06 14:11:18.618321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.591 [2024-11-06 14:11:18.618491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.591 [2024-11-06 14:11:18.618645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.591 [2024-11-06 14:11:18.618651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.591 [2024-11-06 14:11:18.618657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.591 [2024-11-06 14:11:18.618663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.591 [2024-11-06 14:11:18.630454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.591 [2024-11-06 14:11:18.631065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.591 [2024-11-06 14:11:18.631095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.591 [2024-11-06 14:11:18.631103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.591 [2024-11-06 14:11:18.631270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.591 [2024-11-06 14:11:18.631424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.591 [2024-11-06 14:11:18.631431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.591 [2024-11-06 14:11:18.631436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.591 [2024-11-06 14:11:18.631442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.591 [2024-11-06 14:11:18.643079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.591 [2024-11-06 14:11:18.643583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.591 [2024-11-06 14:11:18.643599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.591 [2024-11-06 14:11:18.643605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.591 [2024-11-06 14:11:18.643767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.591 [2024-11-06 14:11:18.643920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.591 [2024-11-06 14:11:18.643925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.591 [2024-11-06 14:11:18.643931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.591 [2024-11-06 14:11:18.643935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.591 [2024-11-06 14:11:18.655724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.591 [2024-11-06 14:11:18.656338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.591 [2024-11-06 14:11:18.656368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.591 [2024-11-06 14:11:18.656377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.591 [2024-11-06 14:11:18.656544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.591 [2024-11-06 14:11:18.656698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.656704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.656710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.656715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.668372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.668756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.668772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.668778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.668930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.669082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.669087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.669093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.669098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.681033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.681517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.681531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.681537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.681687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.681844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.681854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.681859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.681864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.693647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.694017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.694029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.694035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.694185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.694336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.694342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.694347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.694352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.706282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.706728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.706740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.706750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.706901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.707052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.707058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.707063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.707068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.718984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.719553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.719583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.719591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.719764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.719919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.719926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.719931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.719941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.731720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.732218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.732234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.732239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.732391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.732542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.732548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.732553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.732558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.744340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.744947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.744977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.744986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.745152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.745306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.745313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.745319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.745324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.757043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.757623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.757653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.757662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.757836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.757991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.757997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.758003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.758008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.769774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.770348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.770377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.770386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.770553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.770707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.592 [2024-11-06 14:11:18.770714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.592 [2024-11-06 14:11:18.770719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.592 [2024-11-06 14:11:18.770725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.592 [2024-11-06 14:11:18.782502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.592 [2024-11-06 14:11:18.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.592 [2024-11-06 14:11:18.783137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.592 [2024-11-06 14:11:18.783146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.592 [2024-11-06 14:11:18.783313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.592 [2024-11-06 14:11:18.783467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.783474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.783479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.593 [2024-11-06 14:11:18.783485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.593 [2024-11-06 14:11:18.795260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.593 [2024-11-06 14:11:18.795754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.593 [2024-11-06 14:11:18.795783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.593 [2024-11-06 14:11:18.795792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.593 [2024-11-06 14:11:18.795959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.593 [2024-11-06 14:11:18.796113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.796119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.796124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.593 [2024-11-06 14:11:18.796129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2592924 Killed "${NVMF_APP[@]}" "$@" 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.593 [2024-11-06 14:11:18.807904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.593 [2024-11-06 14:11:18.808500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.593 [2024-11-06 14:11:18.808530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.593 [2024-11-06 14:11:18.808539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.593 [2024-11-06 14:11:18.808706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.593 [2024-11-06 14:11:18.808865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.808872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.808878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:32.593 [2024-11-06 14:11:18.808884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2594633 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2594633 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2594633 ']' 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:32.593 14:11:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.593 [2024-11-06 14:11:18.820526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.593 [2024-11-06 14:11:18.820987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.593 [2024-11-06 14:11:18.821002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.593 [2024-11-06 14:11:18.821008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.593 [2024-11-06 14:11:18.821159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.593 [2024-11-06 14:11:18.821311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.821317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.821322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.593 [2024-11-06 14:11:18.821327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.593 [2024-11-06 14:11:18.833248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.593 [2024-11-06 14:11:18.833845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.593 [2024-11-06 14:11:18.833876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.593 [2024-11-06 14:11:18.833888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.593 [2024-11-06 14:11:18.834057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.593 [2024-11-06 14:11:18.834211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.834218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.834223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.593 [2024-11-06 14:11:18.834229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.593 [2024-11-06 14:11:18.845878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.593 [2024-11-06 14:11:18.846473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.593 [2024-11-06 14:11:18.846503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.593 [2024-11-06 14:11:18.846512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.593 [2024-11-06 14:11:18.846678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.593 [2024-11-06 14:11:18.846839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.593 [2024-11-06 14:11:18.846847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.593 [2024-11-06 14:11:18.846853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.593 [2024-11-06 14:11:18.846859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.858 [2024-11-06 14:11:18.858504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.858 [2024-11-06 14:11:18.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.858 [2024-11-06 14:11:18.859084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.858 [2024-11-06 14:11:18.859090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.858 [2024-11-06 14:11:18.859242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.858 [2024-11-06 14:11:18.859394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.858 [2024-11-06 14:11:18.859400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.858 [2024-11-06 14:11:18.859405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.858 [2024-11-06 14:11:18.859411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.858 [2024-11-06 14:11:18.868806] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:29:32.858 [2024-11-06 14:11:18.868851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.858 [2024-11-06 14:11:18.871191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.858 [2024-11-06 14:11:18.871636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.858 [2024-11-06 14:11:18.871653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:32.858 [2024-11-06 14:11:18.871659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:32.858 [2024-11-06 14:11:18.871815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:32.858 [2024-11-06 14:11:18.871967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.858 [2024-11-06 14:11:18.871974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.858 [2024-11-06 14:11:18.871979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.858 [2024-11-06 14:11:18.871984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.858 [2024-11-06 14:11:18.883905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.858 [2024-11-06 14:11:18.884394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.858 [2024-11-06 14:11:18.884407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.858 [2024-11-06 14:11:18.884413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.858 [2024-11-06 14:11:18.884563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.858 [2024-11-06 14:11:18.884715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.858 [2024-11-06 14:11:18.884721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.858 [2024-11-06 14:11:18.884727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.858 [2024-11-06 14:11:18.884732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.858 [2024-11-06 14:11:18.896653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.897079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.897118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.897285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.897439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.897446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.897451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.897457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.909312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.909826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.909857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.909866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.910039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.910194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.910200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.910206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.910212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.922014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.922523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.922553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.922562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.922732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.922891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.922899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.922905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.922911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.934690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.935266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.935297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.935306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.935473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.935628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.935634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.935640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.935646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.947443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.948048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.948078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.948087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.948255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.948409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.948416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.948425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.948431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.958476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:32.859 [2024-11-06 14:11:18.960081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.960629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.960659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.960668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.960843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.960998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.961005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.961010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.961016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.972803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.973318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.973333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.973339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.973491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.973643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.973649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.973654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.973659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.985434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.859 [2024-11-06 14:11:18.986016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.859 [2024-11-06 14:11:18.986046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.859 [2024-11-06 14:11:18.986056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.859 [2024-11-06 14:11:18.986223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.859 [2024-11-06 14:11:18.986377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.859 [2024-11-06 14:11:18.986384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.859 [2024-11-06 14:11:18.986390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.859 [2024-11-06 14:11:18.986401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.859 [2024-11-06 14:11:18.987978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:32.859 [2024-11-06 14:11:18.987998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:32.859 [2024-11-06 14:11:18.988005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:32.859 [2024-11-06 14:11:18.988010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:32.859 [2024-11-06 14:11:18.988015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:32.860 [2024-11-06 14:11:18.989047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:32.860 [2024-11-06 14:11:18.989187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.860 [2024-11-06 14:11:18.989190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:32.860 [2024-11-06 14:11:18.998192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:18.998816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:18.998847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:18.998857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:18.999028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:18.999182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:18.999188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:18.999195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:18.999201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.010840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.011356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.011387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.011397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.011564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.011719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.011725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.011731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.011737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.023517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.024106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.024137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.024146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.024321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.024476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.024482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.024488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.024494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.036275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.036853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.036885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.036894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.037063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.037217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.037224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.037229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.037235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.049025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.049614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.049644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.049653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.049826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.049981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.049987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.049993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.049999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.061785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.062372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.062402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.062411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.062578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.062732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.062743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.062755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.062761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.074532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.075095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.075125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.075134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.075301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.075456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.075462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.075468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.075474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.087259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.860 [2024-11-06 14:11:19.087853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.860 [2024-11-06 14:11:19.087883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.860 [2024-11-06 14:11:19.087892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.860 [2024-11-06 14:11:19.088059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.860 [2024-11-06 14:11:19.088214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.860 [2024-11-06 14:11:19.088220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.860 [2024-11-06 14:11:19.088226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.860 [2024-11-06 14:11:19.088231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.860 [2024-11-06 14:11:19.100011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.861 [2024-11-06 14:11:19.100644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.861 [2024-11-06 14:11:19.100673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.861 [2024-11-06 14:11:19.100682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.861 [2024-11-06 14:11:19.100856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.861 [2024-11-06 14:11:19.101011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.861 [2024-11-06 14:11:19.101018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.861 [2024-11-06 14:11:19.101023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.861 [2024-11-06 14:11:19.101029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.861 [2024-11-06 14:11:19.112686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.861 [2024-11-06 14:11:19.113272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.861 [2024-11-06 14:11:19.113302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.861 [2024-11-06 14:11:19.113311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.861 [2024-11-06 14:11:19.113478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.861 [2024-11-06 14:11:19.113633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.861 [2024-11-06 14:11:19.113639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.861 [2024-11-06 14:11:19.113645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.861 [2024-11-06 14:11:19.113650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.861 [2024-11-06 14:11:19.125434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.861 [2024-11-06 14:11:19.125786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.861 [2024-11-06 14:11:19.125802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:32.861 [2024-11-06 14:11:19.125808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:32.861 [2024-11-06 14:11:19.125961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:32.861 [2024-11-06 14:11:19.126113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.861 [2024-11-06 14:11:19.126119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.861 [2024-11-06 14:11:19.126124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.861 [2024-11-06 14:11:19.126130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 4984.50 IOPS, 19.47 MiB/s [2024-11-06T13:11:19.403Z] [2024-11-06 14:11:19.138191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.138705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.138719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.138724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.138879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.123 [2024-11-06 14:11:19.139030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.123 [2024-11-06 14:11:19.139036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.123 [2024-11-06 14:11:19.139041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.123 [2024-11-06 14:11:19.139046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 [2024-11-06 14:11:19.150823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.151386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.151420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.151429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.151596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.123 [2024-11-06 14:11:19.151755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.123 [2024-11-06 14:11:19.151763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.123 [2024-11-06 14:11:19.151768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.123 [2024-11-06 14:11:19.151773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 [2024-11-06 14:11:19.163548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.164047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.164062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.164068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.164220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.123 [2024-11-06 14:11:19.164371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.123 [2024-11-06 14:11:19.164376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.123 [2024-11-06 14:11:19.164381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.123 [2024-11-06 14:11:19.164386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 [2024-11-06 14:11:19.176294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.176785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.176805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.176812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.176968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.123 [2024-11-06 14:11:19.177121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.123 [2024-11-06 14:11:19.177127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.123 [2024-11-06 14:11:19.177132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.123 [2024-11-06 14:11:19.177138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 [2024-11-06 14:11:19.188904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.189501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.189531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.189540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.189711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.123 [2024-11-06 14:11:19.189872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.123 [2024-11-06 14:11:19.189879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.123 [2024-11-06 14:11:19.189884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.123 [2024-11-06 14:11:19.189890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.123 [2024-11-06 14:11:19.201516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.123 [2024-11-06 14:11:19.202100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.123 [2024-11-06 14:11:19.202131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.123 [2024-11-06 14:11:19.202140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.123 [2024-11-06 14:11:19.202308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.124 [2024-11-06 14:11:19.202463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.124 [2024-11-06 14:11:19.202469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.124 [2024-11-06 14:11:19.202475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.124 [2024-11-06 14:11:19.202480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.124 [2024-11-06 14:11:19.214256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.124 [2024-11-06 14:11:19.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.124 [2024-11-06 14:11:19.214774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420
00:29:33.124 [2024-11-06 14:11:19.214782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set
00:29:33.124 [2024-11-06 14:11:19.214949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor
00:29:33.124 [2024-11-06 14:11:19.215103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:33.124 [2024-11-06 14:11:19.215110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:33.124 [2024-11-06 14:11:19.215115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:33.124 [2024-11-06 14:11:19.215121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:33.124 [2024-11-06 14:11:19.226890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.227473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.227503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.227512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.227679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.227839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.227850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.227855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.227861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.239630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.240215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.240245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.240254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.240422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.240576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.240583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.240588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.240594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.252376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.253035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.253065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.253074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.253240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.253395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.253401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.253407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.253412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.265055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.265505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.265534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.265543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.265710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.265872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.265880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.265885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.265895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.277669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.278279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.278309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.278318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.278485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.278640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.278647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.278652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.278658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.290286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.290959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.290989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.290998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.291165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.291320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.291326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.291332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.291338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.302970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.303340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.303355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.303360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.303512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.303662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.303668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.124 [2024-11-06 14:11:19.303673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.124 [2024-11-06 14:11:19.303678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.124 [2024-11-06 14:11:19.315589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.124 [2024-11-06 14:11:19.316155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.124 [2024-11-06 14:11:19.316188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.124 [2024-11-06 14:11:19.316197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.124 [2024-11-06 14:11:19.316364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.124 [2024-11-06 14:11:19.316518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.124 [2024-11-06 14:11:19.316524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.316530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.316535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.328310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.328825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.328855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.328864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.329031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.329186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.329193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.329198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.329204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.340995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.341498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.341513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.341519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.341670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.341826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.341832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.341837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.341841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.353622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.354026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.354057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.354066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.354239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.354393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.354399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.354405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.354410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.366350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.366725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.366741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.366751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.366903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.367054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.367060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.367065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.367070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.378992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.379342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.379355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.379360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.379511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.379662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.379667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.379672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.379677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.125 [2024-11-06 14:11:19.391742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.125 [2024-11-06 14:11:19.392369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.125 [2024-11-06 14:11:19.392400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.125 [2024-11-06 14:11:19.392408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.125 [2024-11-06 14:11:19.392576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.125 [2024-11-06 14:11:19.392730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.125 [2024-11-06 14:11:19.392741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.125 [2024-11-06 14:11:19.392753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.125 [2024-11-06 14:11:19.392759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.387 [2024-11-06 14:11:19.404395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.387 [2024-11-06 14:11:19.404868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.387 [2024-11-06 14:11:19.404884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.387 [2024-11-06 14:11:19.404889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.387 [2024-11-06 14:11:19.405040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.387 [2024-11-06 14:11:19.405192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.387 [2024-11-06 14:11:19.405198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.387 [2024-11-06 14:11:19.405203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.387 [2024-11-06 14:11:19.405208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.387 [2024-11-06 14:11:19.417121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.387 [2024-11-06 14:11:19.417597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.387 [2024-11-06 14:11:19.417610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.387 [2024-11-06 14:11:19.417615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.387 [2024-11-06 14:11:19.417770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.387 [2024-11-06 14:11:19.417922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.387 [2024-11-06 14:11:19.417928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.387 [2024-11-06 14:11:19.417933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.387 [2024-11-06 14:11:19.417938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.387 [2024-11-06 14:11:19.429846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.387 [2024-11-06 14:11:19.430336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.387 [2024-11-06 14:11:19.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.387 [2024-11-06 14:11:19.430354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.430505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.430656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.430661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.430666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.430674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.442591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.443185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.443215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.443224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.443391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.443546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.443552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.443558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.443563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.455256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.455854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.455885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.455894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.456061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.456216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.456222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.456228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.456233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.467882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.468481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.468510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.468519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.468686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.468846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.468853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.468858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.468864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.480642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.481254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.481288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.481297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.481465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.481619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.481626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.481633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.481639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.493281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.493853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.493883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.493893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.494063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.494217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.494224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.494230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.494235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.506024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.506630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.506660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.506669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.506843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.506998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.507004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.507010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.507016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.518644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.519030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.519046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.519052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.519207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.519358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.519363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.519368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.519373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.531295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.531753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.531766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.531772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.531923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.532073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.532079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.532084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.532089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.544012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.544465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.544477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.388 [2024-11-06 14:11:19.544482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.388 [2024-11-06 14:11:19.544633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.388 [2024-11-06 14:11:19.544789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.388 [2024-11-06 14:11:19.544796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.388 [2024-11-06 14:11:19.544801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.388 [2024-11-06 14:11:19.544806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.388 [2024-11-06 14:11:19.556729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.388 [2024-11-06 14:11:19.557255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.388 [2024-11-06 14:11:19.557268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.557273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.557424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.557575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.557584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.557589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.557594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.569364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.569975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.570006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.570015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.570182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.570336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.570343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.570348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.570354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.582000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.582587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.582617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.582627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.582801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.582957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.582964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.582969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.582975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.594754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.595379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.595409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.595418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.595585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.595739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.595752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.595758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.595768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.607402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.607647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.607669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.607675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.607838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.607991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.607997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.608003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.608008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.620072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.620302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.620321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.620328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.620484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.620637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.620644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.620650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.620655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.632716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.633193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.633207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.633213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.633364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.633515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.633520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.633525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.633530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.645443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.645860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.645895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.645905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.646072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.646235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.646242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.646248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.646254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 [2024-11-06 14:11:19.658184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.389 [2024-11-06 14:11:19.658691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.389 [2024-11-06 14:11:19.658706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.389 [2024-11-06 14:11:19.658712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.389 [2024-11-06 14:11:19.658868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.389 [2024-11-06 14:11:19.659020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.389 [2024-11-06 14:11:19.659026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.389 [2024-11-06 14:11:19.659031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.389 [2024-11-06 14:11:19.659036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.389 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.389 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:33.389 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.389 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.389 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.650 [2024-11-06 14:11:19.670809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.650 [2024-11-06 14:11:19.671352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.650 [2024-11-06 14:11:19.671383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.650 [2024-11-06 14:11:19.671392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.650 [2024-11-06 14:11:19.671559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.650 [2024-11-06 14:11:19.671714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.650 [2024-11-06 14:11:19.671720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.650 [2024-11-06 14:11:19.671726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.650 [2024-11-06 14:11:19.671732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.650 [2024-11-06 14:11:19.683517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.650 [2024-11-06 14:11:19.684165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.650 [2024-11-06 14:11:19.684196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.650 [2024-11-06 14:11:19.684205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.650 [2024-11-06 14:11:19.684372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.650 [2024-11-06 14:11:19.684527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.650 [2024-11-06 14:11:19.684534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.684539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.684545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 [2024-11-06 14:11:19.696190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.696776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.696807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.696816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.696984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.697138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.697145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.697150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.697156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 [2024-11-06 14:11:19.706047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.651 [2024-11-06 14:11:19.708945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.709377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.709407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.709416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.709584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.709738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.709751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.709757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.709766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 [2024-11-06 14:11:19.721684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.722222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.722237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.722243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.722394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.722546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.722552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.722557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.722562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 [2024-11-06 14:11:19.734332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.734944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.734974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.734983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.735151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.735305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.735312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.735318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.735325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 Malloc0 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 [2024-11-06 14:11:19.746976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.747481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.747496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.747502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.747658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.747814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.747821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.747826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.747831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 [2024-11-06 14:11:19.759610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 [2024-11-06 14:11:19.760275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.651 [2024-11-06 14:11:19.760306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abf280 with addr=10.0.0.2, port=4420 00:29:33.651 [2024-11-06 14:11:19.760315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf280 is same with the state(6) to be set 00:29:33.651 [2024-11-06 14:11:19.760483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abf280 (9): Bad file descriptor 00:29:33.651 [2024-11-06 14:11:19.760639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.651 [2024-11-06 14:11:19.760645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.651 [2024-11-06 14:11:19.760651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.651 [2024-11-06 14:11:19.760656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 [2024-11-06 14:11:19.770615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.651 [2024-11-06 14:11:19.772301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.651 14:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2593386 00:29:33.651 [2024-11-06 14:11:19.799667] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:35.293 4887.43 IOPS, 19.09 MiB/s
[2024-11-06T13:11:22.513Z] 5896.00 IOPS, 23.03 MiB/s
[2024-11-06T13:11:23.453Z] 6672.44 IOPS, 26.06 MiB/s
[2024-11-06T13:11:24.395Z] 7311.00 IOPS, 28.56 MiB/s
[2024-11-06T13:11:25.335Z] 7819.82 IOPS, 30.55 MiB/s
[2024-11-06T13:11:26.275Z] 8243.25 IOPS, 32.20 MiB/s
[2024-11-06T13:11:27.215Z] 8611.62 IOPS, 33.64 MiB/s
[2024-11-06T13:11:28.598Z] 8920.93 IOPS, 34.85 MiB/s
00:29:42.318 Latency(us)
00:29:42.318 [2024-11-06T13:11:28.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.318 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.318 Verification LBA range: start 0x0 length 0x4000
00:29:42.318 Nvme1n1 : 15.01 9185.51 35.88 12999.71 0.00 5750.19 549.55 13981.01
00:29:42.318 [2024-11-06T13:11:28.598Z] ===================================================================================================================
00:29:42.318 [2024-11-06T13:11:28.598Z] Total : 9185.51 35.88 12999.71 0.00 5750.19 549.55 13981.01
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:42.318 14:11:28
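The bdevperf summary above reports IOPS and MiB/s for 4096-byte IOs; the two columns are related by a fixed conversion, which can be sanity-checked offline (a standalone sketch, not part of the test scripts; awk assumed available):

```shell
# Cross-check the report: MiB/s should equal IOPS * IO size / 2^20.
iops=9185.51          # final Nvme1n1 IOPS from the table above
io_size=4096          # bytes per IO, from the job line
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints 35.88 MiB/s, matching the MiB/s column
```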
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.318 rmmod nvme_tcp 00:29:42.318 rmmod nvme_fabrics 00:29:42.318 rmmod nvme_keyring 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2594633 ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2594633 ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2594633' 00:29:42.318 killing process with pid 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # kill 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2594633 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.318 14:11:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.859 00:29:44.859 real 0m28.250s 00:29:44.859 user 1m2.958s 00:29:44.859 sys 0m7.718s 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 ************************************ 00:29:44.859 END TEST nvmf_bdevperf 00:29:44.859 ************************************ 00:29:44.859 14:11:30 
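The killprocess trace above (a `kill -0` liveness probe, a `ps -o comm=` lookup, a guard against killing the sudo wrapper, then kill and wait) follows a common guarded-kill pattern; a minimal standalone sketch, with names illustrative rather than the exact autotest_common.sh source:

```shell
# Guarded kill: verify the pid is alive and is the process we expect before killing it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")   # resolve the command name (procps ps)
    [ "$name" != "sudo" ] || return 1         # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

For example, `sleep 60 & killprocess $!` terminates the background sleep but refuses to act on an empty or stale pid.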
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 ************************************ 00:29:44.859 START TEST nvmf_target_disconnect 00:29:44.859 ************************************ 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.859 * Looking for test storage... 00:29:44.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.859 14:11:30 
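The lt/cmp_versions walk traced here splits each version string on `.` and compares component by component; an illustrative reimplementation of that idea (not the scripts/common.sh source):

```shell
# Return 0 (true) when $1 is a strictly lower version than $2.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)      # split both versions on '.'
    local i n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( 10#$a < 10#$b )) && return 0     # force base 10 to survive leading zeros
        (( 10#$a > 10#$b )) && return 1
    done
    return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Note the numeric compare: lexically `1.2` would sort after `1.10`, but component-wise 2 < 10, which is what the lcov check above relies on.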
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:44.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.859 --rc genhtml_branch_coverage=1 00:29:44.859 --rc genhtml_function_coverage=1 00:29:44.859 --rc genhtml_legend=1 00:29:44.859 --rc geninfo_all_blocks=1 00:29:44.859 --rc geninfo_unexecuted_blocks=1 
00:29:44.859 00:29:44.859 ' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:44.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.859 --rc genhtml_branch_coverage=1 00:29:44.859 --rc genhtml_function_coverage=1 00:29:44.859 --rc genhtml_legend=1 00:29:44.859 --rc geninfo_all_blocks=1 00:29:44.859 --rc geninfo_unexecuted_blocks=1 00:29:44.859 00:29:44.859 ' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:44.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.859 --rc genhtml_branch_coverage=1 00:29:44.859 --rc genhtml_function_coverage=1 00:29:44.859 --rc genhtml_legend=1 00:29:44.859 --rc geninfo_all_blocks=1 00:29:44.859 --rc geninfo_unexecuted_blocks=1 00:29:44.859 00:29:44.859 ' 00:29:44.859 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:44.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.859 --rc genhtml_branch_coverage=1 00:29:44.859 --rc genhtml_function_coverage=1 00:29:44.859 --rc genhtml_legend=1 00:29:44.859 --rc geninfo_all_blocks=1 00:29:44.859 --rc geninfo_unexecuted_blocks=1 00:29:44.859 00:29:44.859 ' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.860 14:11:30 
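As the PATH values above show, paths/export.sh prepends the same toolchain directories every time it is sourced, so PATH accumulates many duplicate entries; a hypothetical helper (not part of the SPDK scripts) to collapse them while keeping first-seen order:

```shell
# Remove duplicate PATH entries, keeping the first occurrence of each directory.
dedup_path() {
    local out= d
    local IFS=:
    for d in $1; do
        case ":$out:" in
            *":$d:"*) ;;                  # already present, skip
            *) out=${out:+$out:}$d ;;     # append, adding ':' only after the first
        esac
    done
    printf '%s\n' "$out"
}
dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin"
# prints /opt/go/1.21.1/bin:/usr/bin
```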
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.860 14:11:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.001 
14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:53.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:53.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.001 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:53.002 Found net devices under 0000:31:00.0: cvl_0_0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:53.002 Found net devices under 0000:31:00.1: cvl_0_1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.002 14:11:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:29:53.002 00:29:53.002 --- 10.0.0.2 ping statistics --- 00:29:53.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.002 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:29:53.002 00:29:53.002 --- 10.0.0.1 ping statistics --- 00:29:53.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.002 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.002 14:11:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.002 ************************************ 00:29:53.002 START TEST nvmf_target_disconnect_tc1 00:29:53.002 ************************************ 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.002 [2024-11-06 14:11:38.685863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.002 [2024-11-06 14:11:38.685959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a7f60 with 
addr=10.0.0.2, port=4420 00:29:53.002 [2024-11-06 14:11:38.686001] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:53.002 [2024-11-06 14:11:38.686019] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:53.002 [2024-11-06 14:11:38.686028] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:53.002 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:53.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:53.002 Initializing NVMe Controllers 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:53.002 00:29:53.002 real 0m0.145s 00:29:53.002 user 0m0.063s 00:29:53.002 sys 0m0.083s 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.002 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:53.002 ************************************ 00:29:53.002 END TEST nvmf_target_disconnect_tc1 00:29:53.002 ************************************ 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:53.003 14:11:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.003 ************************************ 00:29:53.003 START TEST nvmf_target_disconnect_tc2 00:29:53.003 ************************************ 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2600718 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2600718 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2600718 ']' 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.003 14:11:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.003 [2024-11-06 14:11:38.846686] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:29:53.003 [2024-11-06 14:11:38.846751] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.003 [2024-11-06 14:11:38.947877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.003 [2024-11-06 14:11:39.000373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.003 [2024-11-06 14:11:39.000422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.003 [2024-11-06 14:11:39.000431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.003 [2024-11-06 14:11:39.000438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.003 [2024-11-06 14:11:39.000444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
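The `nvmf_tcp_init` steps traced earlier (flush the test interfaces, move the target side into a network namespace, assign the 10.0.0.0/24 addresses, open TCP port 4420, ping both ways) can be sketched as below. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from the log; `run()` is a hypothetical dry-run helper so the sketch can be read and printed without root privileges — swap it for `sudo "$@"` to actually execute.

```shell
run() { echo "+ $*"; }                     # hypothetical dry-run helper

TGT_IF=cvl_0_0   NS=cvl_0_0_ns_spdk       # target side, isolated in a netns
INI_IF=cvl_0_1                             # initiator side, stays in the host

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"      # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                     # sanity-check both directions
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this plumbing in place, the `nvmf_tgt` app is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the `nvmfappstart` trace shows), so killing it later severs TCP connections without disturbing the host network stack.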
00:29:53.003 [2024-11-06 14:11:39.002958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:53.003 [2024-11-06 14:11:39.003123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:53.003 [2024-11-06 14:11:39.003282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.003 [2024-11-06 14:11:39.003282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 Malloc0 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 [2024-11-06 14:11:39.761339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 [2024-11-06 14:11:39.801742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2600808 00:29:53.581 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.582 14:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:56.155 14:11:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2600718 00:29:56.155 14:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 
Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Read completed with error (sct=0, sc=8) 00:29:56.155 starting I/O failed 00:29:56.155 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 [2024-11-06 14:11:41.842270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O 
failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Write completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 00:29:56.156 Read completed with error (sct=0, sc=8) 00:29:56.156 starting I/O failed 
00:29:56.156 [2024-11-06 14:11:41.842638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.156 [2024-11-06 14:11:41.843232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.843301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.843572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.843589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.844062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.844119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.844369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.844385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.844701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.844714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-11-06 14:11:41.845131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.845187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.845559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.845574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.846035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.846094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.846507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.846522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.847000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.847056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-11-06 14:11:41.847406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.847420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.847648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.848012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.848025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.848383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.848395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-11-06 14:11:41.848721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-11-06 14:11:41.848733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-11-06 14:11:41.849097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.156 [2024-11-06 14:11:41.849110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.156 qpair failed and we were unable to recover it.
00:29:56.156 [... the same three-line failure (posix.c:1054 connect() errno = 111 → nvme_tcp.c:2288 sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats on every retry from 14:11:41.849478 through 14:11:41.885020 ...]
00:29:56.159 [2024-11-06 14:11:41.885343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.885357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.885662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.885676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.885898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.885916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.886194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.886212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.886443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.886460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 
00:29:56.159 [2024-11-06 14:11:41.886795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.886814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.887068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.887087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.887340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.887358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.159 [2024-11-06 14:11:41.887696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.159 [2024-11-06 14:11:41.887714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.159 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.887959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.887980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.888364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.888382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.888738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.888767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.889040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.889059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.889395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.889413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.889750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.889769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.890001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.890023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.890409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.890430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.890772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.890791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.891199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.891217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.891550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.891568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.891818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.891837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.892226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.892244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.892569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.892587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.892803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.892823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.893176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.893194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.893544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.893561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.893804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.893822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.894176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.894196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.894532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.894550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.894798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.894818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.895111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.895134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.895474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.895491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.895704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.895722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.896072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.896090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.896420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.896438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.896773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.896791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.897138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.897155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.897486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.897506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.897841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.897867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.898248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.898272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 
00:29:56.160 [2024-11-06 14:11:41.898641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.160 [2024-11-06 14:11:41.898666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.160 qpair failed and we were unable to recover it. 00:29:56.160 [2024-11-06 14:11:41.899008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.899034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.899376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.899402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.899757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.899785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.900154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.900180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.900560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.900586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.900943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.900969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.901314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.901340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.901711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.901736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.902129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.902156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.902431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.902456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.902716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.902742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.903000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.903025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.903395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.903427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.903765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.903792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.904039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.904065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.904418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.904443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.904804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.904832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.905201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.905226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.905596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.905622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.906008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.906034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.906422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.906447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.906823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.907231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.907257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.907466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.907494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.907872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.907902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.908257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.908286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.908647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.908676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.909054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.909086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.909430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.909459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.909822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.909860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.910097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.910128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.910477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.910506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.910870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.910899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.911265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.911293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 
00:29:56.161 [2024-11-06 14:11:41.911646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.911674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.912053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.912081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.912445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.912473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.912824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.912853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.161 qpair failed and we were unable to recover it. 00:29:56.161 [2024-11-06 14:11:41.913174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.161 [2024-11-06 14:11:41.913202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.162 qpair failed and we were unable to recover it. 
00:29:56.162 [... ~110 further identical posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock failures against tqpair=0x7fbf24000b90 (10.0.0.2:4420), each ending "qpair failed and we were unable to recover it.", spanning 2024-11-06 14:11:41.913554 through 14:11:41.954959, elided ...]
00:29:56.165 [2024-11-06 14:11:41.955315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.955349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.955704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.955732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.956115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.956144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.956510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.956538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.956907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.956936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 
00:29:56.165 [2024-11-06 14:11:41.957300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.957329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.957689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.957719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.958064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.958093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.958450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.958478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.958841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.958870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 
00:29:56.165 [2024-11-06 14:11:41.959110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.959141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.959541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.959571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.959947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.959976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.960376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.960403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.960665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.960694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 
00:29:56.165 [2024-11-06 14:11:41.960850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.960882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.961163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.961192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.961560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.961587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.961976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.962006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.962373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.962401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 
00:29:56.165 [2024-11-06 14:11:41.962667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.962695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.963071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.963101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.963466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.963493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.963740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.963785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.964178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.964207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 
00:29:56.165 [2024-11-06 14:11:41.964574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.964604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.964981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.965011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.965171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.965202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.965552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.165 [2024-11-06 14:11:41.965581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.165 qpair failed and we were unable to recover it. 00:29:56.165 [2024-11-06 14:11:41.965943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.965974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.966361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.966391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.966760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.966791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.967045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.967077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.967441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.967470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.967826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.967857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.968217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.968245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.968682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.968710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.968925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.968954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.969321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.969349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.969719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.969757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.970126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.970162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.970542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.970569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.970910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.970939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.971321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.971349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.971711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.971740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.972019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.972047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.972409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.972439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.972805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.972834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.973108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.973136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.973483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.973512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.973647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.973678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.974091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.974121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.974477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.974506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.974794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.974824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.975073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.975105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.975456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.975484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.975869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.975900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.976260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.976288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.976664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.976692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.977048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.977078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 
00:29:56.166 [2024-11-06 14:11:41.977413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.977441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.977803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.977835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.978203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.978231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.978631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.166 [2024-11-06 14:11:41.978660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.166 qpair failed and we were unable to recover it. 00:29:56.166 [2024-11-06 14:11:41.979022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.979051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 
00:29:56.167 [2024-11-06 14:11:41.979406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.979433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.979872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.979901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.980280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.980310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.980664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.980693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.981052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.981082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 
00:29:56.167 [2024-11-06 14:11:41.981434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.981464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.981738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.981775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.981997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.982028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.982334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.982362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.982736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.982776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 
00:29:56.167 [2024-11-06 14:11:41.983172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.983201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.983562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.983590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.983955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.983986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.984335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.984364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 00:29:56.167 [2024-11-06 14:11:41.984706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.167 [2024-11-06 14:11:41.984734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.167 qpair failed and we were unable to recover it. 
00:29:56.170 [2024-11-06 14:11:42.026255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.026283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.026647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.026675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.027063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.027094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.027338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.027369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.027570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.027598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 
00:29:56.170 [2024-11-06 14:11:42.027980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.028009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.028360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.028388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.028613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.028644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.029022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.029051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.029409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.029440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 
00:29:56.170 [2024-11-06 14:11:42.029778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.029808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.030145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.030173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.030557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.030585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.030930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.030960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.031209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.031240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 
00:29:56.170 [2024-11-06 14:11:42.031580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.031611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.031858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.031888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.032244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.032272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.032643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.032672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.033063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.033092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 
00:29:56.170 [2024-11-06 14:11:42.033442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.033471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.033821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.170 [2024-11-06 14:11:42.033851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.170 qpair failed and we were unable to recover it. 00:29:56.170 [2024-11-06 14:11:42.034209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.034236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.034585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.034613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.034979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.035009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.035246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.035277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.035689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.035718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.036091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.036120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.036482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.036512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.036876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.036906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.037259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.037288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.037671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.037700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.038068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.038098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.038320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.038351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.038701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.038730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.039113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.039142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.039365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.039397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.039791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.039833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.040065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.040096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.040448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.040476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.040845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.040877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.041243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.041272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.041641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.041670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.041898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.041930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.042278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.042307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.042662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.042690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.043047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.043079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.043436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.043465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.043732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.043783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.044134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.044163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.044385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.044415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.044769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.044800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.045053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.045084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.045366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.045394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.045680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.045710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.046138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.046168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.046397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.046428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.046777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.046806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.047128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.047156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 00:29:56.171 [2024-11-06 14:11:42.047501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.171 [2024-11-06 14:11:42.047529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.171 qpair failed and we were unable to recover it. 
00:29:56.171 [2024-11-06 14:11:42.047891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.047921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.048259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.048287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.048653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.049051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.049081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.049450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.049478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 
00:29:56.172 [2024-11-06 14:11:42.049835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.049866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.050296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.050326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.050678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.050706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.051056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.051085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.051446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.051476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 
00:29:56.172 [2024-11-06 14:11:42.051842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.051872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.052248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.052277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.052647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.052675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.053022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.053053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.053404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.053432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 
00:29:56.172 [2024-11-06 14:11:42.053792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.053821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.054194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.054222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.054580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.054615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.054987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.055016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 00:29:56.172 [2024-11-06 14:11:42.055358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.172 [2024-11-06 14:11:42.055387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.172 qpair failed and we were unable to recover it. 
00:29:56.175 [2024-11-06 14:11:42.099282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.099665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.099700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.100112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.100515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.100544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.100805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.100841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 
00:29:56.175 [2024-11-06 14:11:42.101195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.101224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.101583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.101612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.101969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.102008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.102369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.102398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.102768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.102801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 
00:29:56.175 [2024-11-06 14:11:42.103157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.103186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.103468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.103497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.103881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.103912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.104286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.104316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.104724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.104765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 
00:29:56.175 [2024-11-06 14:11:42.105039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.105068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.105449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.105478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.175 qpair failed and we were unable to recover it. 00:29:56.175 [2024-11-06 14:11:42.105855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.175 [2024-11-06 14:11:42.105886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.107718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.107793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.108234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.108266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.108517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.108546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.108899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.108931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.109182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.109216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.109601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.109629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.109890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.109920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.110268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.110298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.110672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.110701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.110988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.111019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.111384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.111413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.111810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.111842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.112133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.112164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.112517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.112547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.112896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.112926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.113290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.113319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.115212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.115273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.115720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.115775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.116114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.116144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.116498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.116527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.116897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.116929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.117186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.117216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.117584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.117612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.118004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.118037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.118424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.118454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.118823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.118854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.119201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.119231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.119496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.119528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.119900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.119931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.120300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.120339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.120707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.120736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.121179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.121209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.121603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.121632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.122010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.122041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.122401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.122788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.122818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 00:29:56.176 [2024-11-06 14:11:42.123211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.176 [2024-11-06 14:11:42.123241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.176 qpair failed and we were unable to recover it. 
00:29:56.176 [2024-11-06 14:11:42.123591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.123619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.123977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.124012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.124374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.124402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.124765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.124799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.125159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.125188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.125553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.125583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.125918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.125948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.126170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.126202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.126595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.126623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.126981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.127012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.127357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.127387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.127682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.127709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.127935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.127969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.128335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.128363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.128727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.128765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.129110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.129142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.129500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.129528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.129794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.129824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.130173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.130202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.130567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.130597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.130970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.130999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.131357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.131387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.131791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.132142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.132170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.132407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.132439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.132790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.132819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.133215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.133244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.133477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.133505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.133871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.133902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.134261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.134292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.134728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.134767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.135009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.135042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.135424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.135460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.135767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.135796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.136160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.136188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 
00:29:56.177 [2024-11-06 14:11:42.136546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.136577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.136944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.136974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.137342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.137371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.177 [2024-11-06 14:11:42.137613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.177 [2024-11-06 14:11:42.137646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.177 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.138025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.138056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.138411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.138439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.138803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.138835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.139207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.139237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.139599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.139632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.139999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.140030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.140389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.140419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.140717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.140756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.141135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.141165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.141547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.141578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.141914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.141944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.142313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.142343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.142731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.143000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.143029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.143369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.143398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.143764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.143796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.144124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.144153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.144506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.144536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.144897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.144927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.145169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.145201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.145626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.145656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.146043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.146073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.146426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.146456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.146863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.146895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.147267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.147297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.147692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.147720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.147985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.148017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.148353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.148384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.148763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.148793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.149163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.149193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.149567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.149595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 
00:29:56.178 [2024-11-06 14:11:42.149978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.150007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.150368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.150395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.178 [2024-11-06 14:11:42.150800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.178 [2024-11-06 14:11:42.150836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.178 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.151069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.151101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.151348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.151379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 [2024-11-06 14:11:42.151742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.151783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.152029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.152062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.152419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.152448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.152694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.152723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.153142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.153171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 [2024-11-06 14:11:42.153530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.153557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.153945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.153976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.154338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.154367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.154739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.154776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.155150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.155179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 [2024-11-06 14:11:42.155533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.155565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.155937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.155967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.156325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.156355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.156717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.156754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.157127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.157155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 [2024-11-06 14:11:42.157499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.157527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.157865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.157895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.158260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.158290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.158653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.158681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.159054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.159084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 [2024-11-06 14:11:42.159455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.159483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.159849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.159880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.160259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.160287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 00:29:56.179 [2024-11-06 14:11:42.160645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.179 [2024-11-06 14:11:42.160673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.179 qpair failed and we were unable to recover it. 
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Read completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 Write completed with error (sct=0, sc=8)
00:29:56.179 starting I/O failed
00:29:56.179 [2024-11-06 14:11:42.161491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:56.179 [2024-11-06 14:11:42.162079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.179 [2024-11-06 14:11:42.162198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.179 qpair failed and we were unable to recover it.
00:29:56.179 [2024-11-06 14:11:42.162637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.179 [2024-11-06 14:11:42.162676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.180 qpair failed and we were unable to recover it.
00:29:56.180 [2024-11-06 14:11:42.163186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.180 [2024-11-06 14:11:42.163294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.180 qpair failed and we were unable to recover it.
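The aborted I/Os above all report status `(sct=0, sc=8)`. My reading of the NVMe Base Specification status tables: status code type 0 is the generic command status set, and code 0x08 there is "Command Aborted due to SQ Deletion" — consistent with the queue pair being torn down after the CQ transport error. A small decoder sketch with an abbreviated lookup table (only a few generic codes are transcribed here, so treat the table as illustrative, not a complete spec transcription):

```python
# Abbreviated table of NVMe generic command status codes (SCT=0); values
# transcribed from the NVMe Base Specification status tables.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Map an (sct, sc) pair to a human-readable string; falls back to hex."""
    if sct == 0:
        return GENERIC_STATUS.get(sc, f"Generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x}, SC 0x{sc:02x}"

# The status seen in the log lines above:
msg = decode_status(0, 8)
```

So each "Read/Write completed with error" line is an outstanding command being completed with an abort status rather than a transport-level failure of its own.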
00:29:56.180 [2024-11-06 14:11:42.163730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.163812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.164223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.164254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.164630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.164659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.165103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.165210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.165656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.165696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 
00:29:56.180 [2024-11-06 14:11:42.166127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.166160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.166507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.166537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.166735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.166777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.167044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.167073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 00:29:56.180 [2024-11-06 14:11:42.167355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.180 [2024-11-06 14:11:42.167384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.180 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.209345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.209373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.209714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.209744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.210147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.210176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.210521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.210550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.210912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.210942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.211302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.211330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.211691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.211722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.212097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.212126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.212490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.212518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.212887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.212916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.213277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.213306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.213708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.213738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.214121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.214152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.214577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.214608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.214952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.214984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.215348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.215377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.215631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.215660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.216016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.216046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.216405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.216437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.216790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.216826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.217208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.217237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.217595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.217624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.217974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.218007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.218372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.218402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.218659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.218690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 
00:29:56.183 [2024-11-06 14:11:42.219056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.219086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.219451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.219480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.219852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.183 [2024-11-06 14:11:42.219883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.183 qpair failed and we were unable to recover it. 00:29:56.183 [2024-11-06 14:11:42.220230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.220260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.220533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.220563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.220989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.221020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.221361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.221392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.221727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.221765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.222173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.222203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.222566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.222596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.222889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.222920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.223329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.223358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.223607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.223636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.224003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.224033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.224399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.224429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.224790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.224820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.225187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.225216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.225558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.225586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.225965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.225996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.226352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.226381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.226556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.226590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.226886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.226922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.227297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.227327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.227685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.227714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.228091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.228123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.228456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.228485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.228856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.228887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.229248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.229276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.229673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.229706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.229985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.230014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.230387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.230416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.230668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.230697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.231083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.231113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.231470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.231499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.231942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.231973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.232348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.232377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.232657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.232685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.233030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.233060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.233425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.233455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.233814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.233843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 
00:29:56.184 [2024-11-06 14:11:42.234199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.234227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.235367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.184 [2024-11-06 14:11:42.235418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.184 qpair failed and we were unable to recover it. 00:29:56.184 [2024-11-06 14:11:42.235795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.235832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.237832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.237902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.238345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.238383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 
00:29:56.185 [2024-11-06 14:11:42.238771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.238802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.239184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.239213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.239573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.239603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.239942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.239974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 00:29:56.185 [2024-11-06 14:11:42.240335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.185 [2024-11-06 14:11:42.240365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.185 qpair failed and we were unable to recover it. 
00:29:56.185 [2024-11-06 14:11:42.240798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.185 [2024-11-06 14:11:42.240829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.185 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair-recovery failure for tqpair=0x1ccb010 (addr=10.0.0.2, port=4420) repeats continuously from 14:11:42.240798 through 14:11:42.286389 ...]
00:29:56.188 [2024-11-06 14:11:42.286361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.188 [2024-11-06 14:11:42.286389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.188 qpair failed and we were unable to recover it.
00:29:56.188 [2024-11-06 14:11:42.286762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.286791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.287144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.287172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.287521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.287550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.287917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.287946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.288304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.288331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.288708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.288737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.289078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.289437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.289467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.289817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.289848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.290213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.290242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.290624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.290653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.291020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.291050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.291420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.291449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.291819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.291848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.292312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.292340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.292737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.293069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.293098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.293468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.293497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.293860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.293890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.294200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.294241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.294485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.294518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.294903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.294933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.295288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.295316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.295673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.295701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.296069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.296098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.296538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.296568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.296814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.296843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.297135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.297163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.297541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.297570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.297917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.297947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 
00:29:56.188 [2024-11-06 14:11:42.298183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.298216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.298577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.188 [2024-11-06 14:11:42.298606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.188 qpair failed and we were unable to recover it. 00:29:56.188 [2024-11-06 14:11:42.298980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.299010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.299265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.299295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.299649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.299679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.300071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.300101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.300470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.300501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.300867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.300895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.301243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.301272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.301640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.301668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.302069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.302099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.302464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.302492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.302865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.302895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.303342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.303372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.303723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.303758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.304088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.304117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.304483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.304517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.304865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.304895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.305312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.305685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.305714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.306092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.306121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.306484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.306511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.306876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.306906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.307273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.307301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.307656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.307685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.308036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.308066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.308466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.308495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.308754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.308784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.309172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.309200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.309565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.309593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.309909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.309939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.310262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.310292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.310678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.310706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.311126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.311156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.311511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.311540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.311886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.311916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.312168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.312200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.312576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.312605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.312877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.312907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.313276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.313304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 
00:29:56.189 [2024-11-06 14:11:42.313680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.313709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.314074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.314103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.314464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.314492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.314863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.189 [2024-11-06 14:11:42.314898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.189 qpair failed and we were unable to recover it. 00:29:56.189 [2024-11-06 14:11:42.315244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.190 [2024-11-06 14:11:42.315273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.190 qpair failed and we were unable to recover it. 
00:29:56.190 [2024-11-06 14:11:42.315646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.315675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.316045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.316075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.316436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.316464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.316827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.316856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.317204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.317233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.317609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.317637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.317997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.318028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.318371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.318399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.318767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.318797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.319048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.319077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.319431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.319459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.319817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.319847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.320242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.320271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.320626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.320654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.321002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.321031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.321394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.321423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.321786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.321815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.322158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.322187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.322519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.322549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.322912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.322941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.323315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.323343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.323697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.323725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.324110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.324140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.324504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.324534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.324901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.324930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.325345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.325373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.325715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.325743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.326093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.326121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.326487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.326516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.326887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.326916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.327278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.327306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.327669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.327697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.328073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.328102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.328463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.328491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.328856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.328885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.329252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.329281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.329715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.329743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.330093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.330122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.330489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.330517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.330887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.190 [2024-11-06 14:11:42.330918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.190 qpair failed and we were unable to recover it.
00:29:56.190 [2024-11-06 14:11:42.331292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.331321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.331688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.331717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.332117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.332146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.332509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.332539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.332900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.332930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.333303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.333331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.333698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.333728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.334083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.334111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.334475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.334503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.334865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.334894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.335248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.335278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.335639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.335667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.335936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.335964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.336349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.336378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.336764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.336794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.337150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.337177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.337536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.337564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.337925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.337956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.338330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.338358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.338619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.338646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.339002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.339032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.339391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.339785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.339814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.340174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.340202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.340546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.340573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.340933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.340963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.341319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.341353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.341696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.341725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.341989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.342022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.342410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.342438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.342807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.342838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.343213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.343242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.343596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.343624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.343992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.344022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.344381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.344410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.344765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.344795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.345033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.345061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.345401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.345429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.345775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.345804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.346175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.346203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.346455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.346484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.191 [2024-11-06 14:11:42.346734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.191 [2024-11-06 14:11:42.346781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.191 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.347143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.347172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.347521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.347551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.347917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.347949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.348313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.348341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.348672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.348702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.349144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.349174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.349507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.349544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.349887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.349916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.350258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.350288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.350639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.350667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.351037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.351068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.351320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.351360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.351644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.351675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.352035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.352067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.352431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.352461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.352851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.352881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.353184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.353213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.353578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.353607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.353890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.353919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.354274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.354302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.354672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.354702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.354985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.355015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.355272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.355304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.355673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.355704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.356069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.356100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.356460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.356491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.356849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.356880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.357239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.357268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.357656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.357684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.358060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.358090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.358446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.358475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.358842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.358874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.359227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.192 [2024-11-06 14:11:42.359257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420
00:29:56.192 qpair failed and we were unable to recover it.
00:29:56.192 [2024-11-06 14:11:42.359629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.359657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.360017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.360048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.360396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.360793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.360825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.361178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.361207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 
00:29:56.192 [2024-11-06 14:11:42.361561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.361595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.362000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.192 [2024-11-06 14:11:42.362029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.192 qpair failed and we were unable to recover it. 00:29:56.192 [2024-11-06 14:11:42.362392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.362422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.362668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.362696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.363124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.363156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.363553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.363584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.363970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.364000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.364399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.364429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.364771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.364801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.365065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.365097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.365470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.365500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.365868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.365898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.366271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.366300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.366673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.366701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.367105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.367139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.367576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.367604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.367966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.367995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.368354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.368385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.368722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.368762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.369112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.369143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.369508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.369536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.369894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.369924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.370287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.370315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.370667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.370697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.371087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.371117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.371477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.371507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.371955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.371985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.372315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.372345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.372783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.372813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.373171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.373200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.373558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.373586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.373938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.373971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.374330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.374358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.374780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.374809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.375174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.375202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.375541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.375569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.375911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.375940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.376284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.376312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.376706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.376735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.377101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.377130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.377494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.377887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.377925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.378282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.378311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.378679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.378708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.379085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.379116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 
00:29:56.193 [2024-11-06 14:11:42.379475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.379505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.379765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.379796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.193 qpair failed and we were unable to recover it. 00:29:56.193 [2024-11-06 14:11:42.380134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.193 [2024-11-06 14:11:42.380163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.380528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.380556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.380920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.380949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [2024-11-06 14:11:42.381306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.381334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.381501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.381535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.381945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.381976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.382338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.382366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.382631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.382659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [2024-11-06 14:11:42.383050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.383083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.383436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.383464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.383827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.383859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.384268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.384298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.384652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.384682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [2024-11-06 14:11:42.385044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.385077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.385438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.385467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.385823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.385853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.386091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.386124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.386477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.386505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [2024-11-06 14:11:42.386864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.386895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.387247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.387275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.387681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.387710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.388090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.388127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.388481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.388511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [2024-11-06 14:11:42.388720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.388758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.389108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.389136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.389502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.389531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.389889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.389919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 00:29:56.194 [2024-11-06 14:11:42.390284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.194 [2024-11-06 14:11:42.390312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.194 qpair failed and we were unable to recover it. 
00:29:56.194 [... identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplets repeat for tqpair=0x1ccb010 (addr=10.0.0.2, port=4420) from 2024-11-06 14:11:42.390682 through 14:11:42.432555 ...]
00:29:56.472 [2024-11-06 14:11:42.432920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.432951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.433317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.433346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.433716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.433743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.434127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.434157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.434495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-11-06 14:11:42.434887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.434924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.435263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.435291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.435663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.435691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.436038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.436068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.436429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.436457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-11-06 14:11:42.436825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.436854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.437138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.437166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.437529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.437558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.437793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.437822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-11-06 14:11:42.438185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-11-06 14:11:42.438212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.438561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.438589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.438954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.438984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.439327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.439355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.439733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.439769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.440128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.440157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.440574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.440602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.440861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.440890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.441245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.441273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.441646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.441673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.442038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.442067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.442443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.442471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.442733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.442778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.443046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.443075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.443456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.443484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.443839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.443870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.444206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.444234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.444598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.444626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.444993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.445022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.445348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.445379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.445669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.445698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.446030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.446060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.446419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.446448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.446804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.446833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.447284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.447314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.447674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.447702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.448061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.448091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.448449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.448477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.448823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.448853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.449102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.449130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.449511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.449539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.449877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.449914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.450249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.450278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.450628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.450658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.451013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.451044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.451275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.451303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-11-06 14:11:42.451661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.451689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.452056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.452086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.452352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-11-06 14:11:42.452380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-11-06 14:11:42.452728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.452767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.453128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.453158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.474 [2024-11-06 14:11:42.453533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.453561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.453913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.453943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.454307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.454335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.454697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.454726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.455091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.455120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.474 [2024-11-06 14:11:42.455429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.455458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.455822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.455851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.456232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.456260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.456594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.456623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.456987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.457017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.474 [2024-11-06 14:11:42.457450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.457478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.457727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.457766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.458128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.458157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.458494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.458523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.458885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.458915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.474 [2024-11-06 14:11:42.459270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.459299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.459630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.459658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.460009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.460040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.460369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.460402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.460767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.460797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.474 [2024-11-06 14:11:42.461201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.461230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.461604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.461631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.461870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.461904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.462285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.462314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 00:29:56.474 [2024-11-06 14:11:42.462664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.474 [2024-11-06 14:11:42.462694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.474 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-11-06 14:11:42.504832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.504860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.505205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.505234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.505611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.505639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.505991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.506020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.506403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.506431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-11-06 14:11:42.506688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.506724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.507084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.507113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.507335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.507362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.507743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.507783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.508104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.508132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-11-06 14:11:42.508500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-11-06 14:11:42.508528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-11-06 14:11:42.508792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.508822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.509191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.509220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.509588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.509616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.509975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.510005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.510363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.510391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.510767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.510798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.511216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.511244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.511604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.511632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.511979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.512009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.512256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.512288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.512654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.512682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.513129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.513159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.513522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.513550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.513913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.513942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.514311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.514340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.514675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.514702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.514951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.514985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.515349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.515377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.515683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.515711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.516087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.516116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.516483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.516510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.516867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.516903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.517297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.517325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.517580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.517608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.517982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.518011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.518378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.518406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.518772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.518800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.519131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.519159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.519552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.519581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.519957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.519986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.520354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.520744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.520783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.521160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.521544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.521573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-11-06 14:11:42.521941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.521970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.522328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.522357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.522609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.522637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.522989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.523019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-11-06 14:11:42.523396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-11-06 14:11:42.523424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.523794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.523823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.524180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.524209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.524582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.524610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.524873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.524902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.525251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.525279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.525648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.525677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.525948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.525977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.526333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.526362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.526632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.526660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.527047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.527077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.527474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.527503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.527850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.527880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.528231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.528259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.528622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.528652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.528900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.528933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.529313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.529342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.529717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.529756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.530161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.530190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.530545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.530574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.530936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.530967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.531338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.531368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.531738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.531776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.532120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.532155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.532560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.532589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.532942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.532973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.479 [2024-11-06 14:11:42.533344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.533372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.533737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.533773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.534148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.534176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.534494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.534521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 00:29:56.479 [2024-11-06 14:11:42.534885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.479 [2024-11-06 14:11:42.534918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.479 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-11-06 14:11:42.576991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-11-06 14:11:42.577020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-11-06 14:11:42.577378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-11-06 14:11:42.577408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-11-06 14:11:42.577769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-11-06 14:11:42.577799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-11-06 14:11:42.578120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-11-06 14:11:42.578149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-11-06 14:11:42.578523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-11-06 14:11:42.578552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-11-06 14:11:42.578828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd8f30 is same with the state(6) to be set 00:29:56.482 [2024-11-06 14:11:42.579530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.579632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.580107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.580212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.580666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.580702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.581236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.581337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.581781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.581820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.582174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.582206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.582459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.582493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.583019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.583121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.583594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.583633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.583917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.583949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.584366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.584395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.584770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.584800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.585181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.585211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.585574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.585603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.585965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.585996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.586359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.586388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.586760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.586790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.587176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.587204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.587456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.587485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.587939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.587969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.588233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.588262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.588607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.588635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.588888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.588923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.589267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.589298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.589639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.589669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.590026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.590056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.590416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.590452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.590836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.590865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.591241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.591269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.591634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.591663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.592024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.592053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.592422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.592451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.592705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.592734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.593122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.593151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-11-06 14:11:42.593513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.593542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-11-06 14:11:42.593889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-11-06 14:11:42.593918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.594263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.594292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.594653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.594683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.595045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.595074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.595429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.595459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.595818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.595850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.596236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.596265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.596640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.596668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.597003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.597033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.597379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.597407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.597717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.597757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.598119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.598148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.598549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.598579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.599010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.599041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.599387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.599415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.599805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.599834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.600088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.600121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.600561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.600590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.600930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.600959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.601323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.601352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.601691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.601718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.602130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.602159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.602429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.602458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.602811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.602840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.603084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.603371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.603405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.603806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.603837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.604201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.604230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.604612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.604641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.604911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.604941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 
00:29:56.484 [2024-11-06 14:11:42.605302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.605330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.605699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.605736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.606108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.484 [2024-11-06 14:11:42.606137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.484 qpair failed and we were unable to recover it. 00:29:56.484 [2024-11-06 14:11:42.606373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.485 [2024-11-06 14:11:42.606406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.485 qpair failed and we were unable to recover it. 00:29:56.485 [2024-11-06 14:11:42.606781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.485 [2024-11-06 14:11:42.606813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.485 qpair failed and we were unable to recover it. 
00:29:56.485 [2024-11-06 14:11:42.607092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.485 [2024-11-06 14:11:42.607121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.485 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats with successive timestamps from 14:11:42.607477 through 14:11:42.650410 ...]
00:29:56.488 [2024-11-06 14:11:42.650778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.650808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.651162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.651197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.651540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.651569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.651921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.651951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.652314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.652346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.652707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.652735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.653097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.653127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.653501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.653531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.653886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.653917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.654269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.654300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.654660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.654689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.655067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.655097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.655453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.655480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.655819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.655850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.656265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.656295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.656650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.656679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.657043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.657073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.657381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.657409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.657766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.657796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.658188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.658218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.658591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.658619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.658997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.659027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.659384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.659412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.659783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.659832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.660177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.660208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.660565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.660593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.660942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.660972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.661316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.661345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.661686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.661715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.662077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.662106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.662474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.662502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.662767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.662797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.663146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.663174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.663536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.663564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.663927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.663956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-11-06 14:11:42.664204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.664232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-11-06 14:11:42.664525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-11-06 14:11:42.664553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.664857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.664887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.665257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.665287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.665642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.665670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.665936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.665965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.666327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.666361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.666732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.666768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.667176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.667204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.667576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.667605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.668032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.668061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.668421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.668449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.668805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.668834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.669210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.669237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.669619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.669647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.670019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.670049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.670415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.670444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.670684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.670716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.671099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.671358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.671389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.671770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.671801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.672167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.672195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.672504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.672531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.672904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.672934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.673290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.673318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.673674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.673702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.674019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.674049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.674421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.674449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.674815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.674845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.675198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.675225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.489 [2024-11-06 14:11:42.675626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.675654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.676010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.676041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.676411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.676438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.676805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.676836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 00:29:56.489 [2024-11-06 14:11:42.677202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.489 [2024-11-06 14:11:42.677230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.489 qpair failed and we were unable to recover it. 
00:29:56.490 [2024-11-06 14:11:42.677595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.677623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.678000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.678030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.678393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.678421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.678801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.678830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.679206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.679234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 
00:29:56.490 [2024-11-06 14:11:42.679600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.679628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.680039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.680068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.680420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.680447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.680711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.680765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-11-06 14:11:42.681166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-11-06 14:11:42.681195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.723999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.724029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.724329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.724357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.724727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.724766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.725132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.725161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.725334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.725365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.725728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.725768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.726168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.726196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.726553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.726580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.726928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.726958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.727321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.727351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.727679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.727707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.727969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.727998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.728373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.728402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.728783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.728813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.729197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.729226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.729588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.729616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.729879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.729908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.730285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.730313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.730691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.730719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.731087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.731116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.731371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.731403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.731778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.731809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.732215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.732243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.732545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.732581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.732966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.732995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 
00:29:56.493 [2024-11-06 14:11:42.733259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.733287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.733640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.733668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.493 [2024-11-06 14:11:42.734002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.493 [2024-11-06 14:11:42.734033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.493 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.734397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.734429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.734787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.734818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.735157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.735185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.735437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.735465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.735838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.735869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.736269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.736296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.736651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.736681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.737034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.737064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.737441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.737470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.737832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.737860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.738225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.738259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.738658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.738685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.738934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.738963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.739310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.739710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.739739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.739991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.740024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.740374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.740404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.740769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.740799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.741234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.741262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.741663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.741691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.741985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.742014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.742382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.742411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.742803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.742831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.743190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.743225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.743594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.743622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.744005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.744035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.744405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.744434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 
00:29:56.769 [2024-11-06 14:11:42.744812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.744842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.745206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.745236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.745606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.745634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.769 qpair failed and we were unable to recover it. 00:29:56.769 [2024-11-06 14:11:42.746017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.769 [2024-11-06 14:11:42.746046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.746415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.746443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 
00:29:56.770 [2024-11-06 14:11:42.746832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.746861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.747129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.747156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.747509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.747536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.747888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.747937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.748174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.748207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 
00:29:56.770 [2024-11-06 14:11:42.748579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.748609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.748972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.749003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.749365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.749393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.749762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.749794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.750170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.750198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 
00:29:56.770 [2024-11-06 14:11:42.750537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.750565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.750912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.750942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.751343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.751371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.751700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.751730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 00:29:56.770 [2024-11-06 14:11:42.752045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.770 [2024-11-06 14:11:42.752074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.770 qpair failed and we were unable to recover it. 
00:29:56.770 [2024-11-06 14:11:42.752438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.770 [2024-11-06 14:11:42.752466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.770 qpair failed and we were unable to recover it.
[... the three-line sequence above (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with advancing timestamps from 14:11:42.752826 through 14:11:42.795799; identical repetitions omitted ...]
00:29:56.773 [2024-11-06 14:11:42.796151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.796180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.796531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.796560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.796813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.796843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.797197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.797225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.797601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.797629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 
00:29:56.773 [2024-11-06 14:11:42.798004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.798032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.798393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.798421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.798811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.798840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.799223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.799257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.799618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.799647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 
00:29:56.773 [2024-11-06 14:11:42.799908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.799938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.800198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.800225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.800573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.800601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.801017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.801047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 00:29:56.773 [2024-11-06 14:11:42.801492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.773 [2024-11-06 14:11:42.801521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.773 qpair failed and we were unable to recover it. 
00:29:56.773 [2024-11-06 14:11:42.801874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.801903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.802217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.802244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.802621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.802649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.803032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.803061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.803449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.803790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.803819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.804208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.804237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.804531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.804559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.804850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.804879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.805256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.805284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.805530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.805561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.805928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.805957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.806324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.806354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.806716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.806744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.807110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.807138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.807498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.807526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.807800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.807830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.808226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.808253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.808495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.808527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.808772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.808802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.809036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.809068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.809439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.809467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.809715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.809744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.810126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.810154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.810461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.810489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.810827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.810856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.811110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.811138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.811486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.811513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.811782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.811811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.812083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.812110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.812479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.812509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.812852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.812882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.813260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.813288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.813647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.813680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.814037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.814066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 
00:29:56.774 [2024-11-06 14:11:42.814445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.814473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.814820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.814848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.815217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.815245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.774 [2024-11-06 14:11:42.815616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.774 [2024-11-06 14:11:42.815645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.774 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.815898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.815928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 
00:29:56.775 [2024-11-06 14:11:42.816298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.816326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.816697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.816724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.817108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.817137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.817500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.817528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.817943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.817975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 
00:29:56.775 [2024-11-06 14:11:42.818313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.818340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.818699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.818727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.819110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.819139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.819508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.819537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.819885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.819914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 
00:29:56.775 [2024-11-06 14:11:42.820291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.820319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.820762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.820791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.821158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.821185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.821547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.821575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.821837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.821866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 
00:29:56.775 [2024-11-06 14:11:42.822301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.822329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.822707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.822735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.823076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.823104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.823470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.823498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 00:29:56.775 [2024-11-06 14:11:42.823873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.823904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it. 
00:29:56.775 [2024-11-06 14:11:42.824334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.775 [2024-11-06 14:11:42.824362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.775 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure pair repeated for every retry from 14:11:42.824720 through 14:11:42.867553 (tqpair=0x7fbf24000b90, addr=10.0.0.2, port=4420); repeats trimmed ...]
00:29:56.778 [2024-11-06 14:11:42.867941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.867972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.868342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.868372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.868723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.868762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.869129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.869157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.869426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.869453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 
00:29:56.778 [2024-11-06 14:11:42.869796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.869827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.870177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.870204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.870475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.870861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.870891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.871251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.871280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 
00:29:56.778 [2024-11-06 14:11:42.871643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.778 [2024-11-06 14:11:42.871672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.778 qpair failed and we were unable to recover it. 00:29:56.778 [2024-11-06 14:11:42.872042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.872071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.872454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.872484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.872825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.872854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.873109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.873137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.873489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.873517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.873900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.873931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.874279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.874307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.874678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.874713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.875122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.875151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.875513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.875541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.875983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.876013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.876369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.876397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.876680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.876708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.877061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.877091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.877455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.877484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.877856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.877885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.878265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.878296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.878653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.878686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.879109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.879139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.879484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.879512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.879874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.879904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.880297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.880329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.880554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.880584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.880955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.880984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.881348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.881376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.881729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.881767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.882138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.882167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.882412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.882443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.882804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.882834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.883269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.883297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.883542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.883569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.883925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.883956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.884315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.884344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.884707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.884734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 
00:29:56.779 [2024-11-06 14:11:42.885100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.885131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.885493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.885523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.885894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.885923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.886291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.779 [2024-11-06 14:11:42.886320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.779 qpair failed and we were unable to recover it. 00:29:56.779 [2024-11-06 14:11:42.886704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.886734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.886943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.886975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.887327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.887355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.887732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.887773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.888127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.888155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.888406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.888788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.888818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.889068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.889098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.889471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.889499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.889885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.889921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.890307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.890337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.890692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.890721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.891095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.891125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.891490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.891517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.891807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.891836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.892198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.892226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.892598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.892625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.892982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.893013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.893237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.893265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.893630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.893660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.894011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.894040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.894269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.894300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.894577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.894605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.894989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.895019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.895377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.895407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.895804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.895835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.896205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.896233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.896598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.896626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.896997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.897026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.897398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.897427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.897696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.897725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.898098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.898127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.898492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.898519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.898884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.898914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.899272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.899302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.899687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.899716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 
00:29:56.780 [2024-11-06 14:11:42.900094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.900131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.900461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.900489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.780 qpair failed and we were unable to recover it. 00:29:56.780 [2024-11-06 14:11:42.900875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.780 [2024-11-06 14:11:42.900905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.901263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.901292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.901663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.901692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.902058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.902090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.902427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.902458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.902682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.902713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.903144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.903173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.903527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.903555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.903923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.903952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.904297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.904326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.904667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.904695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.905069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.905098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.905457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.905485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.905867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.905897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.906247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.906276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.906648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.906676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.907017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.907047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.907411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.907439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.907871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.907900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.908264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.908293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.908637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.908665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.909011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.909041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.909384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.909412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.909677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.909705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.910066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.910096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.910426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.910455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.910691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.910720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.910980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.911012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.911394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.911423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.911870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.911901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.912276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.912304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.912665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.912693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.913060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.913089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.913449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.913476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.913842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.913871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.914238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.914266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.914616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.914644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 00:29:56.781 [2024-11-06 14:11:42.915072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.781 [2024-11-06 14:11:42.915101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.781 qpair failed and we were unable to recover it. 
00:29:56.781 [2024-11-06 14:11:42.915347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.915391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.915785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.915816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.916069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.916098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.916474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.916502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.916862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.916892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.917273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.917302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.917707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.917735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.918079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.918117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.918470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.918498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.918864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.918895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.919250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.919278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.919645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.919674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.920072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.920101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.920458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.920486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.920873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.920904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.921158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.921186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.921423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.921455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.921801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.921830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.922194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.922222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.922586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.922614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.923079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.923109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.923336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.923367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.923651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.923680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.923938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.923967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.924344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.924374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.924764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.924793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.925037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.925065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.925431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.925460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.925821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.925851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.926126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.926154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 
00:29:56.782 [2024-11-06 14:11:42.926514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.926551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.926890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.926919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.927331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.927358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.927705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.927736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.782 qpair failed and we were unable to recover it. 00:29:56.782 [2024-11-06 14:11:42.928113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.782 [2024-11-06 14:11:42.928143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 
00:29:56.783 [2024-11-06 14:11:42.928501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.928530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.928770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.928802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.929066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.929095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.929463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.929492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.929908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.929939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 
00:29:56.783 [2024-11-06 14:11:42.930283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.930319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.930665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.930694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.931052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.931083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.931326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.931359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.931700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.931728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 
00:29:56.783 [2024-11-06 14:11:42.931971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.932004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.932379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.932408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.932775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.932804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.933198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.933226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 00:29:56.783 [2024-11-06 14:11:42.933586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.783 [2024-11-06 14:11:42.933615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.783 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.975921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.975957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.976354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.976382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.976760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.976789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.977135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.977162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.977532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.977561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.977927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.977957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.978294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.978322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.978567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.978595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.978937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.978966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.979207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.979238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.979488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.979516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.979881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.979910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.980164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.980195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.980444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.980472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.980803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.980833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.981210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.981238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.981603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.981630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.981868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.981900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.982310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.982338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.982700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.982729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.982974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.983003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.983352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.983380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.983743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.983793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.984043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.984075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.984430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.984459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 
00:29:56.786 [2024-11-06 14:11:42.984822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.786 [2024-11-06 14:11:42.984851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.786 qpair failed and we were unable to recover it. 00:29:56.786 [2024-11-06 14:11:42.985203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.985232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.985632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.985660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.985876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.985908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.986286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.986315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.986668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.986696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.987065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.987094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.987446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.987474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.987839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.987868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.988251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.988279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.988650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.988677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.989043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.989072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.989439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.989466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.989889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.989918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.990142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.990173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.990547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.990587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.990971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.991000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.991363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.991391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.991775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.991806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.992160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.992188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.992442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.992474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.992841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.992871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.993121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.993153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.993525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.993552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.993768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.993800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.994160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.994561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.994588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.994944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.994973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.995346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.995375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.995738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.995783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.996188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.996216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.996571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.996599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.996980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.997009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.997357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.997386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 00:29:56.787 [2024-11-06 14:11:42.997771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.787 [2024-11-06 14:11:42.997799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.787 qpair failed and we were unable to recover it. 
00:29:56.787 [2024-11-06 14:11:42.998158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.998185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:42.998548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.998576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:42.998879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.998907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:42.999133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.999160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:42.999514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.999543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 
00:29:56.788 [2024-11-06 14:11:42.999909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:42.999938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.000293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.000322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.000684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.000712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.001090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.001121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.001479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.001506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 
00:29:56.788 [2024-11-06 14:11:43.001882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.001911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.002198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.002225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.002608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.002636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.002922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.002950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.003313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.003341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 
00:29:56.788 [2024-11-06 14:11:43.003653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.003680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.004059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.004088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.004336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.004364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.004760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.004789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.005192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.005220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 
00:29:56.788 [2024-11-06 14:11:43.005577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.005611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.005976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.006005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.006376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.006403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.006803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 00:29:56.788 [2024-11-06 14:11:43.007155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.788 [2024-11-06 14:11:43.007182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:56.788 qpair failed and we were unable to recover it. 
00:29:56.788 [2024-11-06 14:11:43.007545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.007573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.007932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.007962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.008323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.008351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.008710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.008738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.009110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.009139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.009542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.009569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.009941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.009969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.010344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.010372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.010625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.010652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.011091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.011121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.011480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.011509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.011743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.011784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.012150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.788 [2024-11-06 14:11:43.012179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.788 qpair failed and we were unable to recover it.
00:29:56.788 [2024-11-06 14:11:43.012533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.012561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.012923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.012952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.013317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.013346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.013717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.013753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.013987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.014019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.014374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.014403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.014644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.014674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.015058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.015088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.015538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.015566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.015928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.015957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.016324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.016352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.016722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.016757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.017120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.017148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.017507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.017535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.017910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.017939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.018297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.018324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.018691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.018719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.019075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.019103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.019468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.019497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.019859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.019889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.020259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.020288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.020654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.020682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.021037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.021072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.021430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.021458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.021828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.021857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.022225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.022252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.022693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.022721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.023066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.023096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.023498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.023525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.023896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.023926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.024275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.024303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.024663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.024691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.025062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.025091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.025434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.025462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.025823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.025852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.026234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.026262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.026519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.026547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.026798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.026827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.789 qpair failed and we were unable to recover it.
00:29:56.789 [2024-11-06 14:11:43.027080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.789 [2024-11-06 14:11:43.027111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.027469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.027499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.027866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.027896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.028138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.028169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.028533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.028561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.028970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.028999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.029359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.029387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.029736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.029772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.030114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.030143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.030503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.030900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.030929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.031288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.031317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.031701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.031729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.032093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.032123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.032484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.032512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.032888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.032920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:56.790 [2024-11-06 14:11:43.033147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.790 [2024-11-06 14:11:43.033176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:56.790 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.033418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.033451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.033794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.033824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.034205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.034233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.034593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.034621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.035033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.035062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.035421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.035449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.035801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.036182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.036218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.036579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.036606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.062 qpair failed and we were unable to recover it.
00:29:57.062 [2024-11-06 14:11:43.036866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.062 [2024-11-06 14:11:43.036895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.037258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.037285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.037664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.037693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.038057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.038460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.038489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.038867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.038897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.039234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.039263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.039606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.039633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.039983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.040014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.040373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.040400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.040775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.040806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.041168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.041196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.041457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.041486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.041838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.041867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.042115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.042146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.042511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.042539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.042876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.042906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.043257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.063 [2024-11-06 14:11:43.043285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.063 qpair failed and we were unable to recover it.
00:29:57.063 [2024-11-06 14:11:43.043646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.043674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.044032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.044061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.044463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.044491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.044839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.044869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.045263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.045290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 
00:29:57.063 [2024-11-06 14:11:43.045654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.045683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.046116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.046516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.046546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.046919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.046948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.047313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.047342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 
00:29:57.063 [2024-11-06 14:11:43.047711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.047739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.048107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.048136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.048486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.048514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.048892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.048922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-11-06 14:11:43.049291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.063 [2024-11-06 14:11:43.049319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.063 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.049681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.049708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.050078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.050106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.050333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.050364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.050744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.051072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.051099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.051464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.051499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.051859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.051889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.052108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.052139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.052525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.052555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.052828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.052858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.053215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.053245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.053497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.053525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.053870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.053899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.054261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.054288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.054654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.054683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.055044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.055072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.055513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.055541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.055776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.055805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.056165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.056192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.056555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.056583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.056956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.056986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.057346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.057374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.057738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.057775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.058127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.058155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.058397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.058428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.058771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.058799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.059163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.059190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.059550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.059578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.059935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.059963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.060320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.060348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 
00:29:57.064 [2024-11-06 14:11:43.060652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.060679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.061040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.061070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.061404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.064 [2024-11-06 14:11:43.061434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.064 qpair failed and we were unable to recover it. 00:29:57.064 [2024-11-06 14:11:43.061811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.061839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.062194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.062223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.062581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.062610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.062984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.063012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.063364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.063393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.063767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.063797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.064157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.064184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.064616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.064644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.064975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.065012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.065415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.065442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.065788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.065818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.066176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.066203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.066546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.066587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.066924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.066953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.067291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.067328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.067618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.067648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.067995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.068025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.068384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.068412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.068642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.068673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.069030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.069059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.069424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.069452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.069802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.069830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.070191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.070218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.070589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.070616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.070990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.071018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.071380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.071408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.071770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.071800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.072178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.072207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.072568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.072596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.072964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.072993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.073355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.073383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.073766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.073795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 
00:29:57.065 [2024-11-06 14:11:43.074158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.074186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.074558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.065 [2024-11-06 14:11:43.074587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.065 qpair failed and we were unable to recover it. 00:29:57.065 [2024-11-06 14:11:43.074981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.075009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.075375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.075403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.075772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.075814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.076202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.076230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.076591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.076618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.076989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.077019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.077384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.077411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.077658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.077691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.078085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.078115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.078448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.078477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.078823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.078851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.079213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.079242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.079601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.079628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.079971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.080001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.080260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.080292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.080714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.080741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.080995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.081023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.081279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.081307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.081564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.081598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.081974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.082003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.082242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.082273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.082623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.082652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.083076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.083105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.083363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.083390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.083763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.083793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.084075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.084103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.084478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.084505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.084865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.084895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.085135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.085163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.085505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.085535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.085898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.085927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.086222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.086249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.086396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.086428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 
00:29:57.066 [2024-11-06 14:11:43.086800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.066 [2024-11-06 14:11:43.086830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.066 qpair failed and we were unable to recover it. 00:29:57.066 [2024-11-06 14:11:43.087257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.087285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.087654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.087681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.088031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.088062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.088417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.088445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 
00:29:57.067 [2024-11-06 14:11:43.088813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.088842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.089177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.089207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.089553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.089581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.089936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.089967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.090337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.090365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 
00:29:57.067 [2024-11-06 14:11:43.090721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.090757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.091186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.091213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.091538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.091574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.091929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.091958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.092322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.092350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 
00:29:57.067 [2024-11-06 14:11:43.092702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.092730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.093106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.093135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.093491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.093519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.093881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.093911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.094269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.094298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 
00:29:57.067 [2024-11-06 14:11:43.094661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.094689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.095061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.095092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.095448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.095477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.095841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.095871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.096281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.096310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 
00:29:57.067 [2024-11-06 14:11:43.096669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.096698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.097067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.097097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.097521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.067 [2024-11-06 14:11:43.097550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.067 qpair failed and we were unable to recover it. 00:29:57.067 [2024-11-06 14:11:43.097904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.097934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.098178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.098210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.098590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.098618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.098989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.099019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.099377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.099406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.099778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.099808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.100162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.100191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.100552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.100581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.100819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.100852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.101113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.101141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.101505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.101533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.101966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.102390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.102418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.102756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.102786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.103141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.103169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.103436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.103463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.103703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.103737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.104156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.104187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.104526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.104555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.104908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.104937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.105307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.105337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.105699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.105727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.106142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.106173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.106522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.106550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.106826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.106862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.107234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.107263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.107627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.107656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.107998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.108029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.108399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.108789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.108818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.109277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.109307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.109729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.109767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 
00:29:57.068 [2024-11-06 14:11:43.110019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.068 [2024-11-06 14:11:43.110050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.068 qpair failed and we were unable to recover it. 00:29:57.068 [2024-11-06 14:11:43.110487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.069 [2024-11-06 14:11:43.110516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.069 qpair failed and we were unable to recover it. 00:29:57.069 [2024-11-06 14:11:43.110860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.069 [2024-11-06 14:11:43.110891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.069 qpair failed and we were unable to recover it. 00:29:57.069 [2024-11-06 14:11:43.111238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.069 [2024-11-06 14:11:43.111267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.069 qpair failed and we were unable to recover it. 00:29:57.069 [2024-11-06 14:11:43.111648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.069 [2024-11-06 14:11:43.111678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.069 qpair failed and we were unable to recover it. 
00:29:57.069 [2024-11-06 14:11:43.112051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.069 [2024-11-06 14:11:43.112082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.069 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair (errno = 111, ECONNREFUSED; tqpair=0x7fbf24000b90, addr=10.0.0.2, port=4420) repeats continuously from 14:11:43.112051 through 14:11:43.155585 ...]
00:29:57.072 [2024-11-06 14:11:43.155558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.155585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it.
00:29:57.072 [2024-11-06 14:11:43.155740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.155791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.156122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.156150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.156558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.156585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.156902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.156931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.157300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.157328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 
00:29:57.072 [2024-11-06 14:11:43.157689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.157716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.157940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.157972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.158327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.158355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.158597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.158628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.158994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.159024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 
00:29:57.072 [2024-11-06 14:11:43.159380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.159408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.159770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.159799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.160157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.160187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.160572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.160599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.160821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.160853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 
00:29:57.072 [2024-11-06 14:11:43.161258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.161286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.072 [2024-11-06 14:11:43.161534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.072 [2024-11-06 14:11:43.161561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.072 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.161985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.162013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.162268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.162296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.162657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.162686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.162860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.162892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.163271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.163299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.163665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.163694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.163976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.164006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.164319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.164347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.164712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.164741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.165095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.165123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.165504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.165532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.165792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.165821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.166068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.166100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.166337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.166370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.166649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.166677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.167007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.167043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.167401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.167429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.167657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.167686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.168033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.168061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.168423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.168451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.168821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.168850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.169224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.169252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.169505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.169533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.169878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.169908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.170145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.170175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.170541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.170569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.170909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.170939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.171292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.171319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.171698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.171726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.172004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.172033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.172376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.172404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.172767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.172797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.173050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.173077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 
00:29:57.073 [2024-11-06 14:11:43.173440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.173467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.173843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.173873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.174246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.174274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.073 [2024-11-06 14:11:43.174653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.073 [2024-11-06 14:11:43.174681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.073 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.175026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.175056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.175308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.175335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.175676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.175705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.176069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.176098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.176457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.176485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.176845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.176874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.177240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.177268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.177675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.177702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.178095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.178125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.178485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.178512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.178862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.178892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.179304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.179331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.179573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.179600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.179961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.179990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.180348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.180377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.180758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.180788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.181144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.181171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.181531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.181559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.181926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.181962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.182328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.182356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.182730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.182768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.183170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.183198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.183561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.183588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.183972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.184001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.184368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.184396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.184761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.184792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.185142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.185170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.185540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.185567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.185937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.185967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.186331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.186359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 00:29:57.074 [2024-11-06 14:11:43.186672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.074 [2024-11-06 14:11:43.186699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.074 qpair failed and we were unable to recover it. 
00:29:57.074 [2024-11-06 14:11:43.187068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.187097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.187457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.187486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.187738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.187789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.188149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.188177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.188549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.188578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.188951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.188981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.189343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.074 [2024-11-06 14:11:43.189372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.074 qpair failed and we were unable to recover it.
00:29:57.074 [2024-11-06 14:11:43.189777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.189806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.190212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.190242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.190603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.190632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.190975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.191004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.191381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.191408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.191694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.191721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.192140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.192169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.192411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.192439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.192806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.192836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.193220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.193248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.193608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.193635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.193889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.193917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.194161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.194192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.194587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.194614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.194977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.195006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.195384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.195412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.195783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.195813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.196172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.196200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.196560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.196589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.196815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.196847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.197187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.197222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.197599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.197627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.197981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.198011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.198386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.198414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.198772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.198800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.199159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.199187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.199556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.199585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.199938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.199967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.200346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.200374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.200709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.201108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.201137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.201504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.201532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.201909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.201938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.202300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.202328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.202695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.202724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.203132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.203161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.203528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.203558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.203912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.203942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.075 qpair failed and we were unable to recover it.
00:29:57.075 [2024-11-06 14:11:43.204304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.075 [2024-11-06 14:11:43.204332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.204712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.204739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.204999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.205028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.205365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.205393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.205683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.205711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.206102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.206131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.206488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.206516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.206642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.206673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.206984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.207013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.207381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.207410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.207778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.207808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.208191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.208219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.208574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.208602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.208984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.209013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.209377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.209406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.209765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.209793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.210159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.210186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.210562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.210590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.210954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.210984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.211345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.211373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.211739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.211781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.212196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.212536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.212570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.212937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.212966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.213333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.213361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.213723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.213758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.214162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.214189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.214551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.214578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.214926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.214955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.215328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.215356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.215733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.215775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.216151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.216178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.216548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.216575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.216939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.217337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.076 [2024-11-06 14:11:43.217365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.076 qpair failed and we were unable to recover it.
00:29:57.076 [2024-11-06 14:11:43.217735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.217773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.218171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.218201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.218600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.218627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.218992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.219023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.219395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.219422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.219791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.219820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.220185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.220213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.220577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.220604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.220952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.220980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.221342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.221370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.221739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.221776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.222130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.222158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.222517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.222545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.222921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.222950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.223215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.077 [2024-11-06 14:11:43.223244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.077 qpair failed and we were unable to recover it.
00:29:57.077 [2024-11-06 14:11:43.223622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.223650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.223983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.224013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.224382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.224410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.224782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.224812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.225149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.225177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 
00:29:57.077 [2024-11-06 14:11:43.225535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.225563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.225928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.225956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.226365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.226393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.226744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.226784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.227169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.227197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 
00:29:57.077 [2024-11-06 14:11:43.227556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.227584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.227814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.227846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.228209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.228242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.228609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.228637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.229051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.229080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 
00:29:57.077 [2024-11-06 14:11:43.229485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.229512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.077 [2024-11-06 14:11:43.229872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-06 14:11:43.229900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.077 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.230264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.230292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.230545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.230576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.230796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.230825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.231196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.231223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.231615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.231995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.232025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.232392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.232421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.232791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.232819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.233184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.233212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.233572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.233601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.233975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.234003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.234256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.234284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.234634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.234662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.235011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.235039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.235270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.235301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.235651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.235680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.236038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.236069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.236427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.236456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.236826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.236854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.237203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.237232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.237570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.237597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.237955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.237985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.238323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.238351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.238721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.238757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.239114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.239144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.239534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.239561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.239923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.239954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.240311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.240339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.240594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.240622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.241003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.241032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.241394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.241422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.241796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.241825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.242196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.242223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 
00:29:57.078 [2024-11-06 14:11:43.242490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.242517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.242869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.242898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.243126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.243163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.078 [2024-11-06 14:11:43.243420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-06 14:11:43.243453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.078 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.243777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.243808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.244164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.244193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.244560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.244588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.244964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.244993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.245359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.245388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.245758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.245788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.246171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.246200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.246544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.246575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.246925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.246954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.247328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.247355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.247606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.247638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.247970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.248002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.248364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.248393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.248762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.248793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.249143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.249172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.249506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.249534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.249899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.249931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.250289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.250318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.250684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.250714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.251162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.251191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.251523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.251552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.251896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.251927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.252301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.252329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.252695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.252722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.253133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.253162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.253374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.253403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.253762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.253792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.254165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.254526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.254554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.254915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.254944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.255306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.255334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.079 [2024-11-06 14:11:43.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.255739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.256178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.256205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.256568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.256596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.256977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.257007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 00:29:57.079 [2024-11-06 14:11:43.257352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.079 [2024-11-06 14:11:43.257381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.079 qpair failed and we were unable to recover it. 
00:29:57.082 [2024-11-06 14:11:43.303564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.082 [2024-11-06 14:11:43.303593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.082 qpair failed and we were unable to recover it. 00:29:57.082 [2024-11-06 14:11:43.303946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.082 [2024-11-06 14:11:43.303976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.082 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.304341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.304369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.304739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.304779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.305040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.305073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.305426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.305455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.305890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.305920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.306289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.306317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.306699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.306727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.307092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.307371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.307400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.307758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.307788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.308144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.308172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.308536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.308566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.308923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.308954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.309320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.309348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.309719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.309768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.310171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.310199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.310553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.310582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.310956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.310986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.311345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.311373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.311741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.311782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.311996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.312024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.312421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.312839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.312868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.313184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.313213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.313469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.313497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.313739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.313784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.314145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.314174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.314551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.314580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.314933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.314966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.315337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.315365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.315724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.315763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.316118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.316147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.316502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.316529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.083 [2024-11-06 14:11:43.316871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.316902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.317272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.317300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.317668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.317696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.318062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.318092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 00:29:57.083 [2024-11-06 14:11:43.318442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.083 [2024-11-06 14:11:43.318479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.083 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.318826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.318856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.319211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.319241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.319608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.319636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.319984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.320014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.320377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.320404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.320770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.320800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.321078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.321106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.321468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.321495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.321864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.321894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.322234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.322263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.322643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.322672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.323021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.323051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.323416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.323445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.323804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.323836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.324207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.324234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.324609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.324638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.324996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.325027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.325399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.325429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.325665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.325696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.326084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.326114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.326469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.326500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.326892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.326923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.327263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.327298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.327677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.327705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.084 [2024-11-06 14:11:43.328062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.328091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 
00:29:57.084 [2024-11-06 14:11:43.328460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.084 [2024-11-06 14:11:43.328487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.084 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.328846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.328877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.329242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.329271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.329613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.329641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.329913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.329943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 
00:29:57.355 [2024-11-06 14:11:43.330294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.330323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.330675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.330704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.331056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.331087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.331446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.331475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 00:29:57.355 [2024-11-06 14:11:43.331826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.355 [2024-11-06 14:11:43.331860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.355 qpair failed and we were unable to recover it. 
00:29:57.355 [2024-11-06 14:11:43.332292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.355 [2024-11-06 14:11:43.332322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.355 qpair failed and we were unable to recover it.
[... the same connect()/sock-connection-error/"qpair failed" record triple repeats for the same tqpair (0x7fbf24000b90, addr=10.0.0.2, port=4420) with only the timestamps differing, from 14:11:43.332292 through 14:11:43.376217 ...]
00:29:57.358 [2024-11-06 14:11:43.376181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.358 [2024-11-06 14:11:43.376217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.358 qpair failed and we were unable to recover it.
00:29:57.358 [2024-11-06 14:11:43.376560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.376589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.358 qpair failed and we were unable to recover it. 00:29:57.358 [2024-11-06 14:11:43.376928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.376962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.358 qpair failed and we were unable to recover it. 00:29:57.358 [2024-11-06 14:11:43.377333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.377362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.358 qpair failed and we were unable to recover it. 00:29:57.358 [2024-11-06 14:11:43.377727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.358 qpair failed and we were unable to recover it. 00:29:57.358 [2024-11-06 14:11:43.378115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.378144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.358 qpair failed and we were unable to recover it. 
00:29:57.358 [2024-11-06 14:11:43.378496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.358 [2024-11-06 14:11:43.378524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.378888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.378917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.379293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.379322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.379687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.379717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.380200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.380238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.380500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.380528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.380811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.380842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.381137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.381165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.381502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.381531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.381938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.381967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.382344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.382374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.382713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.382742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.383130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.383158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.383521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.383550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.383900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.383931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.384297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.384326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.384699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.384727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.385041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.385070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.385505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.385535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.385904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.385933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.386298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.386327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.386678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.386705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.387071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.387101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.387492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.387520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.387785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.387819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.388256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.388605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.388633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.389008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.389038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.389369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.389398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.389769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.389800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.390122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.390151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.390525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.390555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.390917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.390947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.391290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.391319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.391662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.391690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 
00:29:57.359 [2024-11-06 14:11:43.391989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.392020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.359 qpair failed and we were unable to recover it. 00:29:57.359 [2024-11-06 14:11:43.392404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.359 [2024-11-06 14:11:43.392432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.392793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.392824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.393240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.393270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.393624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.393654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 
00:29:57.360 [2024-11-06 14:11:43.394048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.394077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.394485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.394515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.394887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.394918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.395313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.395342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.395690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.395725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 
00:29:57.360 [2024-11-06 14:11:43.396103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.396133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.396499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.396528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.396811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.396841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.397208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.397237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.397606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.397636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 
00:29:57.360 [2024-11-06 14:11:43.397985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.398017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.398382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.398412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.398784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.398815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.399179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.399206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.399566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.399595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 
00:29:57.360 [2024-11-06 14:11:43.399948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.399979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.400405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.400434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.400704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.400732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.401136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.401504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.401534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 
00:29:57.360 [2024-11-06 14:11:43.401901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.401932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.402183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.360 [2024-11-06 14:11:43.402211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.360 qpair failed and we were unable to recover it. 00:29:57.360 [2024-11-06 14:11:43.402650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.402680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.403058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.403095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.403362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.403395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 
00:29:57.361 [2024-11-06 14:11:43.403673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.403702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.404073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.404103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.404534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.404563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.404927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.404956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 00:29:57.361 [2024-11-06 14:11:43.405370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.361 [2024-11-06 14:11:43.405398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.361 qpair failed and we were unable to recover it. 
00:29:57.361 [2024-11-06 14:11:43.405639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.361 [2024-11-06 14:11:43.405671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.361 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats ~115 more times, timestamps 14:11:43.406042 through 14:11:43.449205, all with errno = 111 against tqpair=0x7fbf24000b90, addr=10.0.0.2, port=4420 ...]
00:29:57.364 [2024-11-06 14:11:43.449571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.449600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.449973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.450003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.450384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.450413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.450779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.450809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.451161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.451189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 
00:29:57.364 [2024-11-06 14:11:43.451541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.451570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.451977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.452007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.452189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.452219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.452606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.452636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.452891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.452924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 
00:29:57.364 [2024-11-06 14:11:43.453193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.453222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.453575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.453603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.453876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.453906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.454286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.454314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.454553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.454585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 
00:29:57.364 [2024-11-06 14:11:43.454947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.454977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.455337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.364 [2024-11-06 14:11:43.455365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.364 qpair failed and we were unable to recover it. 00:29:57.364 [2024-11-06 14:11:43.455726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.455766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.456131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.456161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.456529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.456564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.456937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.456967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.457317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.457346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.457713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.457740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.458112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.458140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.458502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.458531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.458900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.458931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.459293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.459323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.459666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.459695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.460047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.460077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.460413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.460441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.460693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.460724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.461060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.461089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.461446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.461474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.461861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.461891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.462151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.462181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.462542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.462570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.462932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.462963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.463367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.463395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.463728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.463769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.464124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.464153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.464514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.464543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.464910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.464939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.465295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.465323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.465712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.466086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.466116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 
00:29:57.365 [2024-11-06 14:11:43.466459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.466488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.466770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.365 [2024-11-06 14:11:43.466800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.365 qpair failed and we were unable to recover it. 00:29:57.365 [2024-11-06 14:11:43.467188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.467216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.467571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.467599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.467960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.467991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.468133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.468165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.468547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.468576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.468800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.468830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.469233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.469261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.469640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.469668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.469903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.469934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.470295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.470322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.470569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.470601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.470935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.470965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.471323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.471356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.471704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.471733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.472142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.472171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.472513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.472542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.472906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.472938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.473293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.473322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.473687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.473714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.473995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.474024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.474282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.474314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.474664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.474693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.475109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.475139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.475478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.475508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.475845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.475874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.476116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.476146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.476510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.476539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 00:29:57.366 [2024-11-06 14:11:43.476937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.366 [2024-11-06 14:11:43.476969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.366 qpair failed and we were unable to recover it. 
00:29:57.366 [2024-11-06 14:11:43.477341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.366 [2024-11-06 14:11:43.477370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.366 qpair failed and we were unable to recover it.
[... the same three-line error record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:11:43.477722 through 14:11:43.520496 ...]
00:29:57.369 [2024-11-06 14:11:43.520853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.369 [2024-11-06 14:11:43.520884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.369 qpair failed and we were unable to recover it. 00:29:57.369 [2024-11-06 14:11:43.521252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.369 [2024-11-06 14:11:43.521280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.369 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.521645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.521673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.522036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.522068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.522403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.522432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.522801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.522832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.523206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.523234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.523599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.523628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.523984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.524013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.524353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.524382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.524765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.524797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.525151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.525180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.525491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.525527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.525908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.525938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.526191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.526218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.526568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.526596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.526996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.527031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.527391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.527419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.527664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.527692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.528058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.528088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.528348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.528376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.528737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.528785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.529136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.529164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.529544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.529572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.529934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.529965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.530330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.530359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.530723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.530762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.531138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.531166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.531526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.531555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.531933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.531964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 
00:29:57.370 [2024-11-06 14:11:43.532317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.532345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.532650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.532679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.370 [2024-11-06 14:11:43.533029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.370 [2024-11-06 14:11:43.533059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.370 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.533428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.533457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.533811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.533842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.534209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.534237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.534592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.534619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.534891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.534919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.535298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.535326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.535742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.535783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.536127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.536155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.536514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.536541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.536809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.536838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.537246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.537275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.537638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.537667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.538038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.538068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.538441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.538471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.538827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.538856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.539209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.539238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.539525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.539553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.539919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.539948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.540351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.540380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.540743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.540783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.541133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.541161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.541526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.541553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.541818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.541848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.542120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.542158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.542529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.542557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.542826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.542857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.543122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.543150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.543527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.543555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.543898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.543927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.544302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.544330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.544587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.544615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.545078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.545109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 
00:29:57.371 [2024-11-06 14:11:43.545472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.545502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.545870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.545900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.546271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.546300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.371 [2024-11-06 14:11:43.546756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.371 [2024-11-06 14:11:43.546789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.371 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.547160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.547189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 
00:29:57.372 [2024-11-06 14:11:43.547560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.547590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.548007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.548038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.548392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.548420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.548792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.548822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.549212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.549241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 
00:29:57.372 [2024-11-06 14:11:43.549614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.549644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.550047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.550078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.550427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.550456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.550831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.550861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 00:29:57.372 [2024-11-06 14:11:43.551100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.372 [2024-11-06 14:11:43.551132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.372 qpair failed and we were unable to recover it. 
00:29:57.375 [2024-11-06 14:11:43.592552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.592582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.592937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.592967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.593325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.593353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.593725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.593767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.594123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.594151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 
00:29:57.375 [2024-11-06 14:11:43.594512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.594541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.594883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.594913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.595174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.595206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.595645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.595675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.596062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.596093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 
00:29:57.375 [2024-11-06 14:11:43.596457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.596485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.596852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.596882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.597143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-11-06 14:11:43.597173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.375 qpair failed and we were unable to recover it. 00:29:57.375 [2024-11-06 14:11:43.597539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.597568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.597928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.597959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.598318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.598347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.598530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.598557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.598917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.598947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.599260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.599287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.599644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.599674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.600114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.600145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.600494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.600521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.600768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.600800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.601078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.601106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.601470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.601499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.601866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.601897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.602245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.602281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.602657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.602687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.603133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.603162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.603489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.603518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.603765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.603798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.604162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.604192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.604548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.604577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.604935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.604968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.605213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.605241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.605626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.605656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.606050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.606079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.606439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.606470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.606873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.606905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.607248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.607277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.607663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.607691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.608011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.608040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.608245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.608274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.608524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.608555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.608906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.608937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 
00:29:57.376 [2024-11-06 14:11:43.609199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.609227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.609584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.609612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.609981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.610011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.610372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.376 [2024-11-06 14:11:43.610402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.376 qpair failed and we were unable to recover it. 00:29:57.376 [2024-11-06 14:11:43.610767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.610799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.611158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.611197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.611629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.611656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.611914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.611943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.612335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.612364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.612706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.612734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.613177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.613207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.613565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.613595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.615546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.615614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.616070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.616106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.616472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.616502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.616874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.616905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.617276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.617307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.617681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.617710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.618052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.618084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.618426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.618454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.618828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.618858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.619085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.619122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.619369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.619401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.619767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.619797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.620143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.620172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.620440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.620468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.620827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.620859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.621222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.621252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.621591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.621619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 00:29:57.377 [2024-11-06 14:11:43.621968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.377 [2024-11-06 14:11:43.622006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.377 qpair failed and we were unable to recover it. 
00:29:57.377 [2024-11-06 14:11:43.622364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:57.377 [2024-11-06 14:11:43.622392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 
00:29:57.377 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / qpair failed record for tqpair=0x7fbf24000b90 (addr=10.0.0.2, port=4420) repeats continuously from 14:11:43.622 through 14:11:43.667 ...]
00:29:57.650 [2024-11-06 14:11:43.667766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.667796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.668198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.668226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.668621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.668650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.668985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.669016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.669402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.669432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 
00:29:57.650 [2024-11-06 14:11:43.669810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.669841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.670179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.670208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.670568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.670597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.670940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.670969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.671342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.671373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 
00:29:57.650 [2024-11-06 14:11:43.671763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.650 [2024-11-06 14:11:43.671795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.650 qpair failed and we were unable to recover it. 00:29:57.650 [2024-11-06 14:11:43.672173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.672203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.672556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.672583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.672951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.672982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.673287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.673315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.673571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.673607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.673886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.673916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.674157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.674185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.674568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.674596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.674941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.674970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.675326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.675353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.675698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.675730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.676101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.676132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.676493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.676522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.676886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.676916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.677275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.677303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.677645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.677674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.678025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.678055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.678416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.678445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.678803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.678833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.679211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.679240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.679579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.679607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.679978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.680008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.680246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.680279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.680568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.680599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.680935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.680965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.681195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.681227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.681654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.681683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.682019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.682050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.682405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.682433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.682796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.682828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.683166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.683194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.683559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.683587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.683940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.683969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.684250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.684278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 
00:29:57.651 [2024-11-06 14:11:43.684627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.684656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.651 [2024-11-06 14:11:43.685006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.651 [2024-11-06 14:11:43.685038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.651 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.685393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.685422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.685774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.685804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.686069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.686097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.686481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.686511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.686881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.686912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.687313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.687342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.687695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.687723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.688025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.688055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.688456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.688484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.688856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.688887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.689253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.689283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.689649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.689678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.690051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.690080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.690431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.690461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.690800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.690829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.691068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.691100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.691454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.691490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.691824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.691853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.692214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.692243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.692477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.692507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.692742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.692807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.693179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.693207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.693547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.693576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.693933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.693965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.694298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.694327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.694730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.694769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.695180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.695208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 00:29:57.652 [2024-11-06 14:11:43.695575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.652 [2024-11-06 14:11:43.695602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.652 qpair failed and we were unable to recover it. 
00:29:57.652 [2024-11-06 14:11:43.695877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.652 [2024-11-06 14:11:43.695906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.652 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats verbatim for the same tqpair=0x7fbf24000b90 (addr=10.0.0.2, port=4420), with only the timestamps advancing, until 14:11:43.739 ...]
00:29:57.656 [2024-11-06 14:11:43.739160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.656 [2024-11-06 14:11:43.739188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.656 qpair failed and we were unable to recover it.
00:29:57.656 [2024-11-06 14:11:43.739559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.739587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.739937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.739968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.740335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.740363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.740729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.740774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.741159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.741188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.741550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.741578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.741945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.741975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.742349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.742379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.742686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.742715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.743075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.743105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.743469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.743499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.743852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.743883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.744255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.744285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.744651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.744679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.745044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.745074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.745438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.745465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.745826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.745856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.746070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.746101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.746500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.746530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.746932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.746962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.747363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.747392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.747769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.747801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.748057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.748085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.748465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.748492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.748847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.748877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.749241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.749271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.749520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.749552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.749709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.749740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.750186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.750214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.750522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.750550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 
00:29:57.656 [2024-11-06 14:11:43.750994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.656 [2024-11-06 14:11:43.751024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.656 qpair failed and we were unable to recover it. 00:29:57.656 [2024-11-06 14:11:43.751385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.751413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.751773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.751802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.752183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.752218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.752585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.752614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.752965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.752994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.753352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.753380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.753744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.753785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.754138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.754166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.754535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.754564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.754926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.754956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.755211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.755240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.755598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.755627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.755915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.755945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.756295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.756323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.756682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.756710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.756967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.757001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.757350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.757381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.757679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.757708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.758068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.758098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.758436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.758464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.758829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.758860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.759224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.759255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.759588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.759617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.759981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.760011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.760381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.760410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.760781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.760811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.761186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.761215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.761654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.761683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.762040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.762070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 
00:29:57.657 [2024-11-06 14:11:43.762436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.762466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.762801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.657 [2024-11-06 14:11:43.762832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.657 qpair failed and we were unable to recover it. 00:29:57.657 [2024-11-06 14:11:43.763197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.763224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.763650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.763678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.764052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.764081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 
00:29:57.658 [2024-11-06 14:11:43.764444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.764472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.764847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.764877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.765245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.765273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.765633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.765661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.765914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.765944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 
00:29:57.658 [2024-11-06 14:11:43.766302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.766330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.766682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.766710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.767080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.767111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.767479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.767513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 00:29:57.658 [2024-11-06 14:11:43.767867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.658 [2024-11-06 14:11:43.767898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.658 qpair failed and we were unable to recover it. 
00:29:57.658 [2024-11-06 14:11:43.768268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.768297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.768730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.768780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.769155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.769183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.769541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.769569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.769975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.770004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.770373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.770402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.770772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.770801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.771182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.771210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.771563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.771591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.771965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.771997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.772344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.772374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.772635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.772666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.773063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.773093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.773361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.773389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.773637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.773671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.774050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.774079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.774439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.774467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.774822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.774852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.775298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.775326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.775768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.775797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.776166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.776194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.776547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.776576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.658 qpair failed and we were unable to recover it.
00:29:57.658 [2024-11-06 14:11:43.776938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.658 [2024-11-06 14:11:43.776970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.777339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.777367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.777737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.777780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.778143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.778173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.778543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.778573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.778808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.778842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.779107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.779136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.779492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.779522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.779863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.779894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.780257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.780285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.780631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.780660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.780889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.780921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.781287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.781315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.781675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.781705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.782052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.782081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.782455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.782483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.782759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.782796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.783182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.783210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.783576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.783604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.783945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.783975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.784342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.784370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.784730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.784782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.785000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.785031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.785414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.785442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.785804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.785836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.786127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.786155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.786522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.786552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.786904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.786933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.787297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.787325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.787688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.787716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.788121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.788151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.788511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.788542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.788919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.788949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.789306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.789335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.789716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.789758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.659 qpair failed and we were unable to recover it.
00:29:57.659 [2024-11-06 14:11:43.790157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.659 [2024-11-06 14:11:43.790185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.790516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.790545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.790905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.790936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.791299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.791327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.791684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.791711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.792069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.792098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.792461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.792490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.792848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.792880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.793262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.793292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.793632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.793660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.793930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.793959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.794323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.794351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.794710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.794739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.795140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.795170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.795534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.795564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.795905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.795934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.796320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.796350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.796716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.796755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.797091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.797120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.797500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.797530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.797876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.797907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.798081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.798119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.798512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.798541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.798800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.798828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.799201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.799229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.799477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.799504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.799836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.799866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.800315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.800344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.800704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.800734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.801194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.801223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.801448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.801478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.801855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.801884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.802269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.802296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.802660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.802689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.803062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.803091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.803447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.803811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.803857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.660 qpair failed and we were unable to recover it.
00:29:57.660 [2024-11-06 14:11:43.804219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.660 [2024-11-06 14:11:43.804248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.804518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.804548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.804917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.804946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.805316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.805771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.805801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.806142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.806172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.806511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.806540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.806826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.806856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.807230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.807259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.807618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.807649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.807985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.808016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.808367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.808395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.808760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.808791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.809150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.809177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.809432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.809459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.809711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.809767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.810119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.810149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.810527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.810555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.810925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.810955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.811315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.811344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.811710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-11-06 14:11:43.811741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.661 qpair failed and we were unable to recover it.
00:29:57.661 [2024-11-06 14:11:43.812120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.812148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.812520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.812548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.812893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.812923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.813273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.813311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.813663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.813693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 
00:29:57.661 [2024-11-06 14:11:43.814082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.814114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.814480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.814510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.814866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.814896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.815248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.815277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.815608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.815636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 
00:29:57.661 [2024-11-06 14:11:43.815977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.816008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.816372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.816401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.816769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.816802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.817207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-11-06 14:11:43.817235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.661 qpair failed and we were unable to recover it. 00:29:57.661 [2024-11-06 14:11:43.817606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.817634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.818004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.818033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.818428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.818458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.818826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.818857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.819085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.819114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.819370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.819403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.819659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.819686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.820083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.820113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.820471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.820499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.820716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.820743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.821106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.821135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.821507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.821537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.821888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.821920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.822158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.822186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.822565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.822594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.822863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.822892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.823257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.823287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.823643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.823673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.824053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.824083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.824447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.824475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.824868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.824897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.825230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.825258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.825637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.825665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.826001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.826030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.826430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.826808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.826837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.827192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.827221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.827594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.827622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.827898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.827926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.828288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.828322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.828675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.828706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 
00:29:57.662 [2024-11-06 14:11:43.829026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.829056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.829448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.829808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.829839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.662 [2024-11-06 14:11:43.830203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-11-06 14:11:43.830231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.662 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.830602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.830633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2600718 Killed "${NVMF_APP[@]}" "$@" 00:29:57.663 [2024-11-06 14:11:43.830986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.831017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.831407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.831438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.831804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.831837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:57.663 [2024-11-06 14:11:43.832188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.832218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:57.663 [2024-11-06 14:11:43.832583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.832613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.832866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.832906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.663 [2024-11-06 14:11:43.833154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.833187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:57.663 [2024-11-06 14:11:43.833543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.833573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.663 [2024-11-06 14:11:43.833928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.833959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.834310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.834339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.834574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.834607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.834984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.835014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.835368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.835397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 [2024-11-06 14:11:43.835736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.835777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.836094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.836123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.836513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.836541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.836897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.836926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.837365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.837401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 [2024-11-06 14:11:43.837739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.837796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.838194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.838222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.838582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.838612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.838996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.839027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.839394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.839424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 [2024-11-06 14:11:43.839794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.839824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.840088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.840120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2601631 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2601631 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2601631 ']' 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:57.663 14:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.663 [2024-11-06 14:11:43.844186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.844290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.844782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.844822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.845234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.845264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.663 [2024-11-06 14:11:43.845544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.845582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 
00:29:57.663 [2024-11-06 14:11:43.845932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.663 [2024-11-06 14:11:43.845965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.663 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.846120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.846153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.846532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.846563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.846931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.846967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.847324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.847354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.847714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.847759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.848147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.848179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.848556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.848585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.848826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.848857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.849214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.849245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.849605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.849642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.849973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.850003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.850251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.850281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.850638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.850669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.851016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.851049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.851319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.851350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.851774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.851805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.852211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.852241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.852479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.852509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.852782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.852817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.853233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.853264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.853525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.853556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.853968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.853999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.854279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.854310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.854682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.854713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.855177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.855210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.855569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.855599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.855939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.855970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.856177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.856207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.856576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.856606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.856896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.856925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.857184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.857215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.857571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.857602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.857832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.857865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.858122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.858152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 
00:29:57.664 [2024-11-06 14:11:43.858534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.858565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.858833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.858862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.664 [2024-11-06 14:11:43.859247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.664 [2024-11-06 14:11:43.859276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.664 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.859651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.859680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.859925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.859956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.860322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.860352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.860708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.860739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.861190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.861220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.861604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.861633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.861995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.862027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.862378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.862408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.862786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.862816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.863170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.863201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.863568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.863597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.863972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.864006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.864354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.864390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.864741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.864791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.865191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.865221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.865582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.865612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.866009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.866039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.866398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.866427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.866812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.866842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.867216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.867247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.867489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.867518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.867689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.867717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.868016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.868048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.868449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.868480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.868833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.868864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.869119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.869149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.869423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.869459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.869763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.869802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.870199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.870230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.870475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.870505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.870786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.870818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.871194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.871224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 
00:29:57.665 [2024-11-06 14:11:43.871344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.871375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.871724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.871766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.872184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.872213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.665 [2024-11-06 14:11:43.872560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.665 [2024-11-06 14:11:43.872589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.665 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.872942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.872973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 
00:29:57.666 [2024-11-06 14:11:43.873314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.873343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.873722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.873764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.874225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.874257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.874627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.874658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.874926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.874957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 
00:29:57.666 [2024-11-06 14:11:43.875315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.875343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.875693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.875722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.876116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.876147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.876402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.876431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 00:29:57.666 [2024-11-06 14:11:43.876812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.666 [2024-11-06 14:11:43.876845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.666 qpair failed and we were unable to recover it. 
00:29:57.666 [2024-11-06 14:11:43.877284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.877312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.877676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.877706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.878057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.878088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.878348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.878377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.878781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.878812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.879275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.879311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.879670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.879700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.880162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.880193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.880572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.880603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.880979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.881010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.881400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.881429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.881811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.881841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.882233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.882261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.882640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.882668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.882923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.882951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.883343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.883371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.883739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.883788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.884235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.884264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.884636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.884665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.885009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.885042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.885406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.666 [2024-11-06 14:11:43.885436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.666 qpair failed and we were unable to recover it.
00:29:57.666 [2024-11-06 14:11:43.885687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.885719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.886089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.886119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.886492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.886521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.886977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.887007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.887379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.887409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.887664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.887695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.888087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.888118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.888481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.888509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.888759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.888792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.889169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.889200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.889494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.889525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.889793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.889825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.890194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.890224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.890609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.890639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.891053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.891084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.891348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.891377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.891633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.891662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.892029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.892059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.892286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.892317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.892571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.892601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.892972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.893003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.893248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.893278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.893512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.893542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.893808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.893838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.894234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.894269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.894639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.894669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.895043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.895075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.895411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.895440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.895807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.895838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.896203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.896233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.896602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.896631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.897013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.897043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.897411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.897441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.897825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.897858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.898241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.898270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.898512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.667 [2024-11-06 14:11:43.898541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.667 qpair failed and we were unable to recover it.
00:29:57.667 [2024-11-06 14:11:43.898839] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:29:57.668 [2024-11-06 14:11:43.898900] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.668 [2024-11-06 14:11:43.898903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.898933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.899195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.899223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.899623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.900077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.900108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.900479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.900507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.900840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.900871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.901236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.901265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.901635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.901664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.902028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.902061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.902446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.902814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.902847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.903256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.903287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.903649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.903678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.904125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.904157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.904517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.904547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.904812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.904843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.905104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.905133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.905494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.905525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.905908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.905938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.906320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.906350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.906669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.906699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.906931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.906963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.907352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.907383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.907615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.907645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.908067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.908097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.908460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.908488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.908825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.908856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.909247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.909285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.909506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.909539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.909802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.909834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.910204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.910234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.910468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.910497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.910872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.910903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.911265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.911295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.911655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.911687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.911933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.911966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.668 [2024-11-06 14:11:43.912361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.668 [2024-11-06 14:11:43.912391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.668 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.912791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.912823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.913187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.913216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.913584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.913614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.914013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.914044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.914418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.914448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.914871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.914901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.915266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.915295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.915536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.915566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-06 14:11:43.915926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.669 [2024-11-06 14:11:43.915957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.916344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.916375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.916742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.916797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.917151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.917179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.917423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.917451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.917818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.917857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.918211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.918239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.918582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.918610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.919022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.919051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.919291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.919319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.919663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.919692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.919969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.938 [2024-11-06 14:11:43.919998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.938 qpair failed and we were unable to recover it.
00:29:57.938 [2024-11-06 14:11:43.920364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.920393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.920711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.920739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.921167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.921196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.921558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.921586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.921980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.922010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 
00:29:57.938 [2024-11-06 14:11:43.922399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.922426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.922788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.922817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.923191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.923222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.938 [2024-11-06 14:11:43.923458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.938 [2024-11-06 14:11:43.923487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.938 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.923872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.923901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.924285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.924320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.924691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.924718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.924977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.925006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.925388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.925419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.925652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.925680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.926060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.926091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.926384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.926412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.926640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.926669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.927042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.927071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.927423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.927452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.927684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.927716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.927977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.928006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.928424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.928451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.928816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.928847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.929225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.929253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.929468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.929496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.929765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.929795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.930201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.930231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.930463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.930491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.930874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.930904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.931280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.931310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.931690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.931719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.932067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.932095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.932468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.932499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.932783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.932814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.933183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.933212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.933597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.933926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.933956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.934310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.934338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.934706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.934734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.935119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.935148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.935533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.935562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.935905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.935936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.936309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.936338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.936733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.936787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 
00:29:57.939 [2024-11-06 14:11:43.937190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.937219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.939 qpair failed and we were unable to recover it. 00:29:57.939 [2024-11-06 14:11:43.937460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.939 [2024-11-06 14:11:43.937488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.937893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.937924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.938184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.938213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.938635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.938664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.939035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.939071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.939411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.939440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.939822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.939854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.940244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.940273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.940537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.940565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.940960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.940990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.941297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.941326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.941602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.941631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.941994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.942025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.942480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.942509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.942756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.942787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.943172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.943201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.943607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.943635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.943893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.943924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.944364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.944393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.944762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.944794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.945172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.945200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.945619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.945647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.946003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.946032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.946421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.946451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.946844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.946873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.947239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.947267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.947524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.947554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.947739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.947782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 00:29:57.940 [2024-11-06 14:11:43.948029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.940 [2024-11-06 14:11:43.948057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.940 qpair failed and we were unable to recover it. 
00:29:57.940 [2024-11-06 14:11:43.948317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.940 [2024-11-06 14:11:43.948345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.940 qpair failed and we were unable to recover it.
...
00:29:57.943 [2024-11-06 14:11:43.990915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.943 [2024-11-06 14:11:43.990945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.943 qpair failed and we were unable to recover it.
00:29:57.943 [2024-11-06 14:11:43.991309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.943 [2024-11-06 14:11:43.991336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.943 qpair failed and we were unable to recover it. 00:29:57.943 [2024-11-06 14:11:43.991764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.943 [2024-11-06 14:11:43.991794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.943 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.992180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.992208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.992591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.992620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.992976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.993006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:43.993252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.993281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.993659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.993687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.994065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.994096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.994468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.994496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.994844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.994875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:43.995224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.995252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.995557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.995585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.995983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.996013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.996393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.996421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.996650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.996680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:43.997047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.997078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.997434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.997463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.997844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.997874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.998212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.998240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.998586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.998614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:43.998978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.999014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.999226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.999255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.999497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.999524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:43.999952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:43.999983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.000324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.000353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:44.000721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.000764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.001140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.001169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.001412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.001444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.001844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.001877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.002107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.002134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:44.002386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.002414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.002818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.003089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.003116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.003480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.003507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.003877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.003907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.944 [2024-11-06 14:11:44.004287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.004316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.004552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.004580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.004860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.944 [2024-11-06 14:11:44.004972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.005001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.005381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.005409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 00:29:57.944 [2024-11-06 14:11:44.005858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.944 [2024-11-06 14:11:44.005888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.944 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.006267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.006295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.006537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.006565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.006979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.007009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.007262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.007290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.007562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.007591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.007851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.007884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.008242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.008270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.008622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.008651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.009026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.009058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.009425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.009454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.009903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.009934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.010301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.010329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.010702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.010731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.011148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.011180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.011558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.011586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.012101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.012132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.012392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.012419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.012811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.012841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.013101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.013129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.013459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.013487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.013887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.013926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.014226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.014256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.014384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.014413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.014703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.014733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.014855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.014884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.015262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.015291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.015486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.015518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.015770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.015799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.016073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.016102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.016458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.016491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.016893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.016924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.017295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.017323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.017691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.017719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.017997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.018028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.018390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.018429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 
00:29:57.945 [2024-11-06 14:11:44.018782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.018813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.945 [2024-11-06 14:11:44.019174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.945 [2024-11-06 14:11:44.019203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.945 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.019432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.019460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.019772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.019801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.020162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.020191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.020488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.020516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.020631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.020662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.020993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.021024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.021403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.021431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.021717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.021756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.022127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.022157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.022538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.022567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.022931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.022962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.023221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.023249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.023477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.023506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.023865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.023896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.024282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.024311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.024687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.024718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.025109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.025139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.025505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.025540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.025923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.025955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.026327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.026356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.026771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.026802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.027145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.027173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.027414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.027443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.027798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.027836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.028099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.028129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.028495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.028528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.028874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.028903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.029273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.029302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.029667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.029697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.030089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.030118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.030480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.030509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.030912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.031276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.031304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 
00:29:57.946 [2024-11-06 14:11:44.031676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.031706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.032119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.032150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.032510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.032539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.032838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.032868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.946 qpair failed and we were unable to recover it. 00:29:57.946 [2024-11-06 14:11:44.033240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.946 [2024-11-06 14:11:44.033270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.033635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.033666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.034021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.034414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.034443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.034807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.034836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.035239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.035598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.035627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.036001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.036032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.036397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.036425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.036795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.036825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.037195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.037223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.037589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.037619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.037983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.038014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.038360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.038388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.038756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.038787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.039155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.039184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.039422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.039454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.039702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.039732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.040132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.040474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.040503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.040788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.040820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.041212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.041241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.041595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.041625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.041978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.042008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.042295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.042325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.042694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.042724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.043039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.043076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.043306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.043339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.043729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.043771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.044104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.044134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.044500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.044530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 
00:29:57.947 [2024-11-06 14:11:44.044781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.044812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.045175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.947 [2024-11-06 14:11:44.045204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.947 qpair failed and we were unable to recover it. 00:29:57.947 [2024-11-06 14:11:44.045570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.045600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.045968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.045998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.046362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.046392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.046767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.046799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.047159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.047189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.047564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.047594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.047967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.047999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.048354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.048384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.048735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.048797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.049047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.049077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.049427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.049457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.049819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.049850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.050235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.050265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.050511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.050544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.050905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.050936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.051206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.051236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.051592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.051623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.051864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.051896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.052265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.052295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.052727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.052768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.053176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.053206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.053463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.053492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.053871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.053904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.054255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.054285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.054631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.054661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.054936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.054965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.055337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.055366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.055724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.055764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.056177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.056543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.056574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.057043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.057388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.057418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.057687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.057716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.058112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.058150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.058458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.948 [2024-11-06 14:11:44.058506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.948 [2024-11-06 14:11:44.058522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.948 [2024-11-06 14:11:44.058530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.948 [2024-11-06 14:11:44.058539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.948 [2024-11-06 14:11:44.058500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.058530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 00:29:57.948 [2024-11-06 14:11:44.058895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.948 [2024-11-06 14:11:44.058926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.948 qpair failed and we were unable to recover it. 
00:29:57.948 [2024-11-06 14:11:44.059301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.059331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.059710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.059740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.060090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.060119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.060355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.060384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.060636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.060664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.060880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:57.949 [2024-11-06 14:11:44.061014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.061044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.061060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:57.949 [2024-11-06 14:11:44.061216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:57.949 [2024-11-06 14:11:44.061215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:57.949 [2024-11-06 14:11:44.061440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.061470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.061815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.061852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.062223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.062253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.062680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.062708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.063137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.063167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.063474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.063504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.063770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.063805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.064062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.064090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.064468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.064496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.064873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.064904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.065244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.065272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.065651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.065680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.065898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.065927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.066329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.066358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.066498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.066526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.066813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.066844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.067002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.067029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.067385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.067413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.067773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.067804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.068119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.068147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.068276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.068307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.068695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.068724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.069036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.069065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 
00:29:57.949 [2024-11-06 14:11:44.069430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.069459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.069835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.069866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.070237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.070265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.949 qpair failed and we were unable to recover it. 00:29:57.949 [2024-11-06 14:11:44.070677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.949 [2024-11-06 14:11:44.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.071100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.071130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.071351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.071379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.071760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.071791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.072045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.072077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.072459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.072489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.072825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.072855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.073215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.073245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.073601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.073629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.073976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.074007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.074250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.074277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.074630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.074661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.075029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.075060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.075434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.075461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.075828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.075857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.076199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.076234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.076654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.076684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.077061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.077090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.077461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.077489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.077850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.077881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.078221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.078249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.078502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.078531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.078770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.078800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.079171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.079200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.079421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.079450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.079661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.079690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.079942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.079972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.080332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.080370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.080602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.080635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.080896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.080926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.081155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.081183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.081562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.081589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.081844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.081876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.082099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.082127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.082372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.082400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.082738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.082790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.083010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.083039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 
00:29:57.950 [2024-11-06 14:11:44.083280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.083308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.950 [2024-11-06 14:11:44.083478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-11-06 14:11:44.083505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.950 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.083769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.083801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.084186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.084217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.084554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.084582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.084954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.084986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.085334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.085759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.085790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.086148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.086179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.086406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.086436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.086801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.086832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.087088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.087116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.087480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.087507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.087739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.087780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.088019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.088047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.088486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.088515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.088920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.088952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.089186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.089214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.089617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.089658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.090008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.090039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.090290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.090318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.090567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.090598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.090849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.090880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.091249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.091278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.091529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.091557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.091928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.091957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.092329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.092358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.092730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.092771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.093244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.093600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.093628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.093870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.093901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.094155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.094187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.094553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.094583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.094964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.094996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.095341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.095370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.095620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.095651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.096009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.096040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.096392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.096422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.096592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.096626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 00:29:57.951 [2024-11-06 14:11:44.097008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-11-06 14:11:44.097039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.951 qpair failed and we were unable to recover it. 
00:29:57.951 [2024-11-06 14:11:44.097397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.097425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.097871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.097903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.098149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.098179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.098297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.098329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.098564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.098598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.098807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.098838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.099273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.099304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.099652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.099682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.100052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.100083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.100450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.100479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.100828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.100858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.101223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.101253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.101619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.101649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.102055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.102086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.102455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.102484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.102819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.102852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.103069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.103099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.103458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.103488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.103736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.103785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.104023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.104053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.104396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.104424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.104792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.104823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.105246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.105276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.105636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.105665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.106036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.106065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.106432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.106462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.106695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.106726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.106954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.106984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.107231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.107259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.107509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.107539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 
00:29:57.952 [2024-11-06 14:11:44.107917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.107949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.108315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.108344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.108719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.108764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.952 [2024-11-06 14:11:44.109158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-11-06 14:11:44.109187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.952 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.109561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.109591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.110016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.110046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.110431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.110464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.110566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.110594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.110774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.110806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.111052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.111081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.111334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.111363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.111727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.111771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.112114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.112145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.112511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.112541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.112895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.112926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.113296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.113327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.113546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.113575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.113969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.114000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.114354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.114382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.114662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.114692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.115113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.115143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.115531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.115559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.115921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.115953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.116321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.116350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.116602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.116630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.116994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.117025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.117239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.117267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.117534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.117563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.117792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.117823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.118114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.118143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.118455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.118483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.118693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.118721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.118991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.119020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.119225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.119254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.119493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.119521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.119882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.119914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.120345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.120373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.120589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.120618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.120985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.121016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.121400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.121429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 
00:29:57.953 [2024-11-06 14:11:44.121829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.121859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.122227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-11-06 14:11:44.122255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.953 qpair failed and we were unable to recover it. 00:29:57.953 [2024-11-06 14:11:44.122610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.954 [2024-11-06 14:11:44.122639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.954 qpair failed and we were unable to recover it. 00:29:57.954 [2024-11-06 14:11:44.122878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.954 [2024-11-06 14:11:44.122908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.954 qpair failed and we were unable to recover it. 00:29:57.954 [2024-11-06 14:11:44.123280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.954 [2024-11-06 14:11:44.123309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.954 qpair failed and we were unable to recover it. 
00:29:57.954 [2024-11-06 14:11:44.123530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.123558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.123719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.123757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.123970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.123999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.124222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.124250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.124482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.124511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.124767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.124800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.125151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.125179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.125544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.125573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.125874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.125904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.126271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.126300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.126510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.126548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.126796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.126834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.127064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.127097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.127487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.127518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.127889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.127920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.128284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.128314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.128741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.128782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.129039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.129067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.129321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.129350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.129792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.129823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.130197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.130227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.130578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.130607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.130983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.131012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.131286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.131315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.131672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.131702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.132071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.132102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.132323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.132352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.132601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.132629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.132878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.132911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.133132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.133162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.133510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.133539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.133903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.133934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.134301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.954 [2024-11-06 14:11:44.134331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.954 qpair failed and we were unable to recover it.
00:29:57.954 [2024-11-06 14:11:44.134777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.134808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.135169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.135200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.135436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.135466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.135824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.135854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.136229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.136260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.136486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.136517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.136877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.136906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.137280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.137309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.137688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.137716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.138089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.138119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.138491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.138519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.138771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.138805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.139156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.139184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.139557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.139585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.139825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.139856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.140221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.140249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.140589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.140619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.140873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.140909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.141266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.141294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.141663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.141692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.142060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.142091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.142310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.142339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.142716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.142756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.143116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.143145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.143511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.143539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.143899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.143931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.144298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.144328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.144692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.144721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.144836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.144867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.145286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.145315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.145534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.145562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.145964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.145995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.146297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.146324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.146680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.146711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.147080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.147111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.147469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.147499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.147904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.147935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.148047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.955 [2024-11-06 14:11:44.148077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.955 qpair failed and we were unable to recover it.
00:29:57.955 [2024-11-06 14:11:44.148380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.148412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.148771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.148801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.149020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.149048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.149429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.149457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.149706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.149734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.150095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.150124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.150525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.150553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.150897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.150926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.151370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.151397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.151789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.151819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.152161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.152190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.152459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.152487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.152816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.152845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.153224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.153252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.153611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.153640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.154011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.154040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.154399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.154427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.154798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.154827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.155203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.155231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.155605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.155640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.155898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.155931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.156328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.156357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.156615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.156644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.156982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.956 [2024-11-06 14:11:44.157012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420
00:29:57.956 qpair failed and we were unable to recover it.
00:29:57.956 [2024-11-06 14:11:44.157374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.157402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.157653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.157683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.158049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.158078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.158331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.158360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.158716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.158754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 
00:29:57.956 [2024-11-06 14:11:44.159118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.159148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.159261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.159289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.159540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.159569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.159799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.159828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.956 qpair failed and we were unable to recover it. 00:29:57.956 [2024-11-06 14:11:44.160076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.956 [2024-11-06 14:11:44.160105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.160335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.160365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.160765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.160796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.161138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.161167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.161275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.161307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.161543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.161572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.162007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.162037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.162398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.162425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.162790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.162819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.163110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.163137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.163484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.163511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.163783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.163812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.164196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.164225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.164594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.164623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.165089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.165119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.165569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.165598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.165836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.165867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.166228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.166256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.166484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.166512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.166924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.166955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.167324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.167352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.167728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.167774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.168018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.168046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.168372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.168401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.168618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.168646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.168992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.169022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.169236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.169271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.169489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.169518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.169887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.169917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.170140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.170168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.170546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.170574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.170920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.170957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.171335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.171364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.171725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.171768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.172128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.172156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.172381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.172408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 
00:29:57.957 [2024-11-06 14:11:44.172769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.172798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.173170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.173198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.957 [2024-11-06 14:11:44.173561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.957 [2024-11-06 14:11:44.173589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.957 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.174011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.174040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.174417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.174445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.174697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.174726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.175120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.175149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.175489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.175519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.175902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.175930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.176170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.176198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.176656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.176684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.176935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.176963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.177347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.177376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.177490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.177522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.177952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.177982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.178207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.178234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.178573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.178601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.178967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.178998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.179358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.179386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.179776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.179807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.180181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.180210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.180580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.180608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.180972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.181001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.181230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.181258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.181628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.181658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.182056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.182086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.182445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.182472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.182703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.182735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.182988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.183017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.183387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.183415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.183661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.183697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.183922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.183952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.184162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.184190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.184604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.184633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.185006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.185036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 
00:29:57.958 [2024-11-06 14:11:44.185414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.185443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.958 [2024-11-06 14:11:44.185814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.958 [2024-11-06 14:11:44.185844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.958 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.186217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.186246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.186469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.186497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.186913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.186943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.187314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.187342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.187700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.187728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.188098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.188127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.188579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.188608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.188876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.188906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.189152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.189180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.189496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.189524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.189875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.189906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.190283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.190311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.190680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.190709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.190866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.190895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.191278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.191306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.191724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.191772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.191907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.191934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.192341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.192370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.192729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.192768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.193047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.193075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.193423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.193453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.193809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.193839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.194069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.194099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.194453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.194481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.194717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.194754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.194993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.195021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.195391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.195420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.195802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.195832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.196098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.196125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.196481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.196509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.196601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.196627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.196829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.196857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.197131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.197160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.197525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.197560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.197908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.197937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.198159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.198187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.198364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.198392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-06 14:11:44.198484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.959 [2024-11-06 14:11:44.198510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-06 14:11:44.198711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.198738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.198975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.199003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.199340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.199367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.199738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.199787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.200106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.200133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-06 14:11:44.200498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.200525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.200880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.200910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.201299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.201327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.201558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.201586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.201854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.201884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-06 14:11:44.202257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.202286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.202637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.202665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.203040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.203069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.203301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.203329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.203721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.203757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-06 14:11:44.204050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.204077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.204426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.204455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.204826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.204856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.205231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.205259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.205632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.205661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-06 14:11:44.206017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.206046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.206249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.206277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.206650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.206680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.206970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.206999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-06 14:11:44.207326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.960 [2024-11-06 14:11:44.207354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:58.235 [2024-11-06 14:11:44.207721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.207771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.208108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.208136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.208504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.208531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.208907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.208936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.209170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.209197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.209437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.209469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.209716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.209758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.210119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.210149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.210276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.210305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.210678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.210706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.211204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.211242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.211452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.211480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.211848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.211879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.212273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.212301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.212663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.212690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.213052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.213082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.213466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.213494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.213827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.213856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.214084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.214112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.214330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.214359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.214704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.214731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.214840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.214868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.215257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.215286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.215536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.215568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.215924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.215955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.216175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.216203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.216596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.216623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.216988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.217354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.217382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.217741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.217778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.218135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.218163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.218530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.218558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.218796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.218824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.219064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.219093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.219443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.219471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 
00:29:58.236 [2024-11-06 14:11:44.219836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.219896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.220127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.220159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.236 [2024-11-06 14:11:44.220536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.236 [2024-11-06 14:11:44.220566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.236 qpair failed and we were unable to recover it. 00:29:58.237 [2024-11-06 14:11:44.220778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.237 [2024-11-06 14:11:44.220808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.237 qpair failed and we were unable to recover it. 00:29:58.237 [2024-11-06 14:11:44.221224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.237 [2024-11-06 14:11:44.221251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.237 qpair failed and we were unable to recover it. 
00:29:58.239 [2024-11-06 14:11:44.260450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.260478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.260714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.260741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.260983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.261011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.261368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.261395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.261776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.261806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.262202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.262231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.262585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.262612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.262848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.262878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.263228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.263258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.263602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.263629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.263995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.264026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.264282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.264311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.264698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.264726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.264978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.265007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.265295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.265327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.265690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.265718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.265964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.265993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.266339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.266367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.266614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.266641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.266809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.266838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.267083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.267128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.267512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.267540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.267910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.267942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.268305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.268333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.268706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.268734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.269112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.269141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.269508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.269537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.269909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.269940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.270308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.270335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.270560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.270587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 
00:29:58.240 [2024-11-06 14:11:44.270970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.271000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.240 qpair failed and we were unable to recover it. 00:29:58.240 [2024-11-06 14:11:44.271370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.240 [2024-11-06 14:11:44.271400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.271764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.271793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.272197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.272225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.272604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.272632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.273005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.273033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.273391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.273419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.273672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.273700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.274102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.274131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.274497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.274526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.274765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.274795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.275167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.275195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.275604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.275633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.276018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.276048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.276415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.276442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.276819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.276848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.277225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.277254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.277624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.277653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.278006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.278036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.278156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.278188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.278536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.278565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.278773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.278802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.279104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.279132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.279347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.279375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.279731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.279773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.279993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.280021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.280152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.280182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.280578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.280607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.280968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.281000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.281232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.281260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.281514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.281551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.281936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.281966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.282330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.282358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.282732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.282770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.283073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.283101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 
00:29:58.241 [2024-11-06 14:11:44.283483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.283510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.283886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.283916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.284294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.284322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.284695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.241 [2024-11-06 14:11:44.284723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.241 qpair failed and we were unable to recover it. 00:29:58.241 [2024-11-06 14:11:44.285107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.285137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 
00:29:58.242 [2024-11-06 14:11:44.285497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.285525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.285907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.285938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.286304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.286332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.286711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.286740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.287149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.287177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 
00:29:58.242 [2024-11-06 14:11:44.287547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.287575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.287694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.287727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.287990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.288022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.288243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.288273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 00:29:58.242 [2024-11-06 14:11:44.288514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.242 [2024-11-06 14:11:44.288543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.242 qpair failed and we were unable to recover it. 
00:29:58.245 [2024-11-06 14:11:44.327280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.327307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.327554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.327581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.327833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.327864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.328227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.328254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.328637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.328665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 
00:29:58.245 [2024-11-06 14:11:44.329115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.329147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.329542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.329571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.329941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.329971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.330342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.330369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.330761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.330791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 
00:29:58.245 [2024-11-06 14:11:44.331183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.331210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.331568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.331596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.331966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.331997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.332218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.332247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.332606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.332634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 
00:29:58.245 [2024-11-06 14:11:44.333022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.333051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.333420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.333447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.333659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.333693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.333973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.334003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.334372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.334400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 
00:29:58.245 [2024-11-06 14:11:44.334638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.334667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.335018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.335047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.335399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.335428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.335799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.245 [2024-11-06 14:11:44.335829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.245 qpair failed and we were unable to recover it. 00:29:58.245 [2024-11-06 14:11:44.336213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.336242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.336509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.336537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.336890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.336919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.337290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.337319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.337551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.337578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.337977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.338006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.338375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.338405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.338671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.338699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.339121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.339150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.339362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.339393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.339789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.339820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.340053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.340081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.340349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.340378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.340756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.340785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.341141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.341169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.341429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.341457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.341801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.341830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.342191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.342219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.342469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.342502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.342908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.342937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.343337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.343365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.343733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.343780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.344132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.344161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.344397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.344425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.344795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.344825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.345185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.345213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.345588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.345618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.345985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.346013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.346379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.346409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.346780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.346810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.347186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.347216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 
00:29:58.246 [2024-11-06 14:11:44.347596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.347623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.347719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.246 [2024-11-06 14:11:44.347758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf24000b90 with addr=10.0.0.2, port=4420 00:29:58.246 qpair failed and we were unable to recover it. 00:29:58.246 [2024-11-06 14:11:44.347827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd8f30 (9): Bad file descriptor 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write 
completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Write completed with error (sct=0, sc=8) 00:29:58.246 starting I/O failed 00:29:58.246 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Write completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Write completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Write completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Write completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Read completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 Write completed with error (sct=0, sc=8) 00:29:58.247 starting I/O failed 00:29:58.247 [2024-11-06 14:11:44.348880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.247 [2024-11-06 14:11:44.349238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.349302] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.349695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.349726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.350060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.350165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.350437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.350475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.350849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.350882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.351140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.351170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 
00:29:58.247 [2024-11-06 14:11:44.351530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.351575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.351937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.351968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.352324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.352354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.352723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.352764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 00:29:58.247 [2024-11-06 14:11:44.352999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.353027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 
00:29:58.247 [2024-11-06 14:11:44.353271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.247 [2024-11-06 14:11:44.353300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.247 qpair failed and we were unable to recover it. 
[The same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (errno = 111, i.e. ECONNREFUSED) repeats for tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 over a hundred more times between 14:11:44.353 and 14:11:44.393; identical repeats elided.]
00:29:58.250 [2024-11-06 14:11:44.393901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.393939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.394326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.394355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.394729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.394766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.395133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.395161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.395374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.395402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 
00:29:58.250 [2024-11-06 14:11:44.395718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.395766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.396118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.396146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.396512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.396910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.396939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.397192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.397224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 
00:29:58.250 [2024-11-06 14:11:44.397604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.397634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.397868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.397897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.398264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.398292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.398641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.398670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.399085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.399115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 
00:29:58.250 [2024-11-06 14:11:44.399468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.399496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.250 [2024-11-06 14:11:44.399875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.250 [2024-11-06 14:11:44.399904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.250 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.400301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.400329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.400551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.400578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.400808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.400837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.401184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.401212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.401620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.401648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.401736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.401773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.402050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.402078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.402438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.402466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.402585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.402615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.402835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.402864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.403087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.403115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.403497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.403525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.403897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.403927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.404383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.404411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.404744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.404784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.405144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.405173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.405551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.405581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.406047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.406076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.406427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.406455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.406824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.406853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.407197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.407227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.407474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.407502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.407886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.407918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.408307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.408343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.408551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.408581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.408972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.409003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.409251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.409283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.409626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.409654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.410009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.410039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.410415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.410443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.410778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.410807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.411160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.411188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.411562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.411590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.411970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.411999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.412341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.412369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.412795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.412824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.413261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.413289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 00:29:58.251 [2024-11-06 14:11:44.413660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.251 [2024-11-06 14:11:44.413688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.251 qpair failed and we were unable to recover it. 
00:29:58.251 [2024-11-06 14:11:44.414076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.414106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.414477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.414505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.414888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.414916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.415280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.415308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.415700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.415728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.416082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.416111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.416479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.416506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.416879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.416910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.417277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.417306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.417682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.417711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.418140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.418170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.418425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.418453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.418860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.418890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.419262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.419292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.419505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.419533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.419776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.419805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.420043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.420070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.420439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.420467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.420838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.420867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.421233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.421261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.421502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.421534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.421923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.421952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.422167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.422195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.422440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.422468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.422693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.422720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.423053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.423090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.423462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.423491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.423743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.423788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.424015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.424047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 00:29:58.252 [2024-11-06 14:11:44.424406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.252 [2024-11-06 14:11:44.424434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.252 qpair failed and we were unable to recover it. 
00:29:58.252 [2024-11-06 14:11:44.424664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.424692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.425063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.425092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.425315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.425343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.425622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.425650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.425900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.425934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.426155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.426183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.426412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.426443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.426824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.426853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.252 qpair failed and we were unable to recover it.
00:29:58.252 [2024-11-06 14:11:44.427072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.252 [2024-11-06 14:11:44.427099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.427352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.427381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.427726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.427761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.428023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.428052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.428434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.428463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.428833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.428862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.429233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.429262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.429641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.429669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.430039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.430070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.430438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.430466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.430839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.430869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.431248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.431276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.431652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.431679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.431910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.431940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.432343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.432372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.432607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.432634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.432841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.432879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.433249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.433277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.433491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.433519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.433882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.433912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.434121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.434149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.434518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.434546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.434859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.434888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.435137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.435167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.435376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.435404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.435813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.435842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.436198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.436227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.436479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.436514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.436875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.436903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.437117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.437145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.437375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.437403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.437630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.437660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.438018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.438047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.438429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.438459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.438875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.438904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.439282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.439310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.439729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.439778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.440183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.440211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.253 [2024-11-06 14:11:44.440580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.253 [2024-11-06 14:11:44.440608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.253 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.440980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.441009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.441389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.441416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.441653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.441681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.442048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.442078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.442236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.442264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.442679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.442706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.442948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.442976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.443234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.443263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.443580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.443608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.443856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.443884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.444129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.444158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.444500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.444528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.444888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.444917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.445252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.445279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.445641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.445669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.446091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.446120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.446483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.446513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.446887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.446916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.447266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.447295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.447653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.447680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.448048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.448078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.448445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.448472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.448837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.449241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.449270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.449651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.449679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.450027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.450057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.450306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.450338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.450695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.450724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.451087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.451123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.451340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.451367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.451579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.451612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.451875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.451905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.452225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.254 [2024-11-06 14:11:44.452252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.254 qpair failed and we were unable to recover it.
00:29:58.254 [2024-11-06 14:11:44.452604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.452631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.452852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.452884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.453137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.453165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.453524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.453552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.453931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.453961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.454334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.454363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.454736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.454773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.454993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.455021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.455395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.455422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.455787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.455817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.456218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.456246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.456613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.456641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.456992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.457020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.457330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.457357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.457736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.255 [2024-11-06 14:11:44.457777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.255 qpair failed and we were unable to recover it.
00:29:58.255 [2024-11-06 14:11:44.457871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.457898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.458148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.458179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.458426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.458458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.458810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.458841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.459069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.459098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 
00:29:58.255 [2024-11-06 14:11:44.459449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.459477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.459831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.459860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.460225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.460254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.460613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.460641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.460999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.461028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 
00:29:58.255 [2024-11-06 14:11:44.461269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.461301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.461660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.461689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.462120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.462151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.462529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.462558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.462778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.462808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 
00:29:58.255 [2024-11-06 14:11:44.463048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.463076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.463467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.463496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.463883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.463913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.464290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.464318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.464562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.464589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 
00:29:58.255 [2024-11-06 14:11:44.464805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.464840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.465062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.465090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.465466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.465494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.255 qpair failed and we were unable to recover it. 00:29:58.255 [2024-11-06 14:11:44.465854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.255 [2024-11-06 14:11:44.465883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.466257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.466285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.466666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.466693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.466918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.466947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.467318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.467345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.467696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.467723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.468140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.468168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.468404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.468431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.468670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.468699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.469121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.469151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.469519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.469547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.469785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.469815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.470185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.470213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.470560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.470587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.470963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.470994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.471246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.471273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.471651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.471679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.472034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.472065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.472202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.472228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.472466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.472498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.472755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.472784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.473012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.473041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.473263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.473291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.473633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.473661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.473895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.473928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.474308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.474337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.474711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.474738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.475121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.475150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.475511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.475540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.475913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.475941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.476310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.476720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.476756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.476962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.476990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.477394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.477422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.477815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.477846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.478253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.478281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.478700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.478729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 
00:29:58.256 [2024-11-06 14:11:44.479105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.479141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.479485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.256 [2024-11-06 14:11:44.479514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.256 qpair failed and we were unable to recover it. 00:29:58.256 [2024-11-06 14:11:44.479877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.479906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.480143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.480170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.480462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.480490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.257 [2024-11-06 14:11:44.480718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.480753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.481071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.481099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.481460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.481489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.481699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.481726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.481853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.481883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.257 [2024-11-06 14:11:44.482345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.482462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.482672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.483091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.483193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.483355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.483390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.483784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.483817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.257 [2024-11-06 14:11:44.484206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.484235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.484598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.484626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.484999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.485030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.485394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.485424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.485783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.485813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.257 [2024-11-06 14:11:44.486052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.486082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.486327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.486356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.486452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.486479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.487161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.487271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.487735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.487795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.257 [2024-11-06 14:11:44.488043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.488072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.488432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.488463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.488687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.488728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.488993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.489023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 00:29:58.257 [2024-11-06 14:11:44.489238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.257 [2024-11-06 14:11:44.489267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.257 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.528156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.528186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.528551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.528580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.528936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.528966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.529343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.529373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.529761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.530027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.530056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.530400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.530431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.530832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.531208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.531238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.531498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.531542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.531804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.531834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.532194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.532225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.532565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.532595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.532808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.532837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.533220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.533249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.533380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.533407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.533654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.533682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.534055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.534085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.534449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.534477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.534823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.534853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.535228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.535257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.535633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.535663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.536018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.536047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.536456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.536824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.536853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-11-06 14:11:44.537239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.537268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.537451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.537478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.537864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.537893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.538119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.538146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-11-06 14:11:44.538391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-11-06 14:11:44.538419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.538637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.538666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.538928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.538958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.539199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.539227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.539571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.539599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.539819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.539848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.540223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.540253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.540497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.540531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.540755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.540785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.541203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.541233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.541595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.541624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.541995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.542025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.542393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.542423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.542639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.542669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.543039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.543072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.543454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.543483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.543828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.543858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.544193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.544221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.544313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.544340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.544589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.544617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.544862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.544890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.545277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.545305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.545529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.545557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.545930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.545959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.546315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.546345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.546552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.546582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.546845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.546875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.547246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.547274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.547648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.547677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.548090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.548118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.548372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.548401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.548823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.548854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.549204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.549234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.549612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.549641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.549993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.550023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.550435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 
00:29:58.537 [2024-11-06 14:11:44.550814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.550845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.551220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.537 [2024-11-06 14:11:44.551248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.537 qpair failed and we were unable to recover it. 00:29:58.537 [2024-11-06 14:11:44.551509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.551537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.551923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.551952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.552311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.552338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 
00:29:58.538 [2024-11-06 14:11:44.552714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.552742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.553092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.553122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.553317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.553347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.553782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.553812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 00:29:58.538 [2024-11-06 14:11:44.554190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.554219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it. 
00:29:58.538 [2024-11-06 14:11:44.554436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.538 [2024-11-06 14:11:44.554464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccb010 with addr=10.0.0.2, port=4420 00:29:58.538 qpair failed and we were unable to recover it.
[... the connect()/qpair error triplet above repeats, with advancing timestamps, for roughly a hundred more connection attempts between 14:11:44.554 and 14:11:44.595, all targeting addr=10.0.0.2, port=4420: first for tqpair=0x1ccb010, then tqpair=0x7fbf2c000b90 (from 14:11:44.568833), then tqpair=0x7fbf20000b90 (from 14:11:44.582115); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:58.541 [2024-11-06 14:11:44.595621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.595649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.596033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.596063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.596436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.596466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.596709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.596742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.597125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.597155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-11-06 14:11:44.597531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.597563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.597789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.597819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.598081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.598112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.598369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.598400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.598782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.598813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-11-06 14:11:44.599088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.599117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.599499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.599529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.599894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.599924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.600218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.600246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.600651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.600683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-11-06 14:11:44.601040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.601069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.601437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.601467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.601830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.601863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.602081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.602110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.602473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.602503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-11-06 14:11:44.602877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.602908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.603359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.603389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.603742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-11-06 14:11:44.603788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-11-06 14:11:44.604119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.604148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.604511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.604540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.604774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.604806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.605019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.605050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.605428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.605457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.605845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.605877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.606243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.606275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.606492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.606520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.606878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.606908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.607139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.607168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.607445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.607475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.607834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.607872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.607971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.608000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.608384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.608412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.608778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.608807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.609199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.609228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.609581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.609610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.609986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.610016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.610347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.610377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.610631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.610658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.610966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.610996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.611440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.611470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.611845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.611876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.612102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.612129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.612362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.612396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.612766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.612797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.613161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.613191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.613361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.613390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.613614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.613642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.613851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.613880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.614261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.614291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.614514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.614542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.614945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.614976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.615334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.615363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.615723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.615759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.616060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.616089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.616440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.616468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-11-06 14:11:44.616705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-11-06 14:11:44.616732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-11-06 14:11:44.616851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.616880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.617170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.617200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.617434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.617463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.617829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.617858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-11-06 14:11:44.618235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.618263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.618515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.618546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.618928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.618958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.619347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.619376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.619766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.619795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-11-06 14:11:44.619925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.619952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.620325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.620354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.620455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.620483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.620609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.620638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-11-06 14:11:44.620854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.620890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-11-06 14:11:44.621251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-11-06 14:11:44.621281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it.
[... the two *ERROR* messages above and the "qpair failed and we were unable to recover it." line repeat for every retry against tqpair=0x7fbf20000b90 through 2024-11-06 14:11:44.654352; the repeats are identical except for their timestamps ...]
00:29:58.546 [2024-11-06 14:11:44.654442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.654471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.654889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.654996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats for every retry against tqpair=0x7fbf2c000b90 through 2024-11-06 14:11:44.660404; the repeats are identical except for their timestamps ...]
00:29:58.546 [2024-11-06 14:11:44.660759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.660790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.661022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.661052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.661405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.661435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.661807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.661838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.662065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.662095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-11-06 14:11:44.662488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.662518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.662899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.662930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.663297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.663328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.663592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.663622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.663870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.663900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-11-06 14:11:44.664150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.664179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.664620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.664650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.664995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.665035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.665262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.665295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.665579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.665608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-11-06 14:11:44.665941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.665972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.666389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.666419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-11-06 14:11:44.666786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-11-06 14:11:44.666818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.667192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.667222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.667602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.667631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.668044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.668075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.668451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.668480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.668700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.668729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.668959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.668991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.669215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.669247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.669574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.669603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.669807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.669840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.670154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.670186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.670420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.670450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.670696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.670731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.671128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.671160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.671577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.671972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.672002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.672338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.672367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.672780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.672810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.673165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.673197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.673427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.673456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.673726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.673766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.674122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.674152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.674379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.674410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.674665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.674694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.675136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.675168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.675531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.675560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.675928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.675960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.676329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.676359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-11-06 14:11:44.676613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.676646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.676866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.676896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.677266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.677294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.677662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-11-06 14:11:44.677692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-11-06 14:11:44.678045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.678279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.678310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.678558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.678591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.678822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.678855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.679072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.679106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.679380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.679408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.679654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.679683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.679919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.679949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.680206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.680237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.680598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.680628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.680982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.681013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.681364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.681394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.681765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.681796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.682155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.682184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.682538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.682568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.682941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.682973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.683345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.683374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.683762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.683794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.684026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.684055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.684425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.684455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.684766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.684796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.685170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.685200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.685570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.685606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.685950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.685981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.686351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.686380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.686781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.686811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.687215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.687245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.687620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.687650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.688022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.688052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.688415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.688445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.688785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.688815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.689183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.689213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.689584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.689612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.690074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.690104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.690323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.690351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.690775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.690804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-11-06 14:11:44.691169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.691198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.691561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-11-06 14:11:44.691588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-11-06 14:11:44.691794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.691823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.692027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.692054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.692158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.692188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.692544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.692572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.692949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.692979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.693340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.693367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.693728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.694182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.694211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.694574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.694601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.694992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.695022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.695419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.695447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.695861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.695892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.696112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.696140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.696406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.696440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.696607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.696636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.696981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.697011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.697240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.697268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.697652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.697680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.697906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.697935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.698302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.698331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.698588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.698616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.698824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.698853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.699317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.699345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.699603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.699631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.699994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.700030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.700277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.700305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.700651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.700679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.701038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.701068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.701428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.701455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.701822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.701851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.702199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.702227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.702455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.702482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.702888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.702919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.703308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.703520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.703549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.703800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.703829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.704047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.704075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-11-06 14:11:44.704328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.704356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-11-06 14:11:44.704782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-11-06 14:11:44.704812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.705029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.705056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.705309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.705337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.705697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.705725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.705977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.706009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.706378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.706406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.706636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.706665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.707060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.707091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.707466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.707493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.707933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.707962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.708338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.708366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.708776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.708804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.709165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.709195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.709594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.709623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.710088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.710117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.710216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.710245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.710379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.710407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.710591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.710619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.710866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.710896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.711127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.711155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.711375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.711403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.711686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.711713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.711967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.711996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.712261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.712290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.712663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.712691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.712836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.712869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.713213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.713254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.713661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.713690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.713865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.713894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.714299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.714329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.714706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.714734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.715142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.715171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.715581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.715609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.715988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.716018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.716450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.716478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-11-06 14:11:44.716724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.716760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.717150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.717178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.717556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.717584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-11-06 14:11:44.717813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-11-06 14:11:44.717843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.718188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.718215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.718450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.718478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.718717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.718765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.719151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.719180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.719556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.719584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.719975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.720005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.720396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.720425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.720675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.720703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.720800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.720827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.721091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.721122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.721458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.721486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.721755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.721789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.722153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.722181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.722558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.722586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.722961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.722991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.723234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.723262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.723632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.723661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.724056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.724085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.724331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.724361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.724736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.724792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.725155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.725184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.725537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.725565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.725920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.725950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.726333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.726361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.726730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.726779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.727014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.727064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.727475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.727526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.727810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.727868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:58.551 [2024-11-06 14:11:44.728287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.728340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:58.551 [2024-11-06 14:11:44.728625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.728675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.551 [2024-11-06 14:11:44.728978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.729028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.551 [2024-11-06 14:11:44.729434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.729482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.551 [2024-11-06 14:11:44.729883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.729954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.730365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.730405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-11-06 14:11:44.730652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.730683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.731056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-11-06 14:11:44.731088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-11-06 14:11:44.731311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.731339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.731725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.731819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.732204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.732235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.732664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.732695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.732854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.732884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.733317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.733347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.733701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.733731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.734009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.734041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.734334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.734364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.734766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.734799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.735174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.735202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.735605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.735633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.736022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.736052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.736323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.736351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.736709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.736736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.736991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.737019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.737474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.737504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.737857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.737886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.738262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.738292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.738659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.738690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.738946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.738977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.739332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.739361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.739740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.739787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.740213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.740244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.740484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.740512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.740791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.740824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.741159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.741187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.741418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.741448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.741731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.741779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.742030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.742067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.742314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.742346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.742594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.742622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-11-06 14:11:44.742939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.742968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-11-06 14:11:44.743346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-11-06 14:11:44.743375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.743619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.743653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.744033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.744067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.744425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.744455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.744677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.744705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.745079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.745109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.745476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.745504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.745871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.745900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.746140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.746168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.746518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.746551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.746941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.746970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.747348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.747376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.747592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.747619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.747880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.747908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.748275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.748305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.748615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.748649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.749014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.749044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.749259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.749286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.749485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.749514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.749653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.749682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.750108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.750139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.750349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.750377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.750682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.750710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.751160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.751190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.751428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.751460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.751843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.751873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.752260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.752288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.752663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.752692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.753120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.753153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.753505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.753534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.753913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.753941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.754293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.754325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.754552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.754583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.754956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.754985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.755214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.755241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-11-06 14:11:44.755581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.755610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.755972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.756007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.756368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.756397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-11-06 14:11:44.756769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-11-06 14:11:44.756800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.757160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.757189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.757562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.757590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.757934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.757962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.758064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.758093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.758375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.758403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.758708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.758737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.758986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.759018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.759259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.759291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.759650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.759678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.760055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.760086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.760444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.760473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.760846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.760876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.761220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.761247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.761631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.761659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.762077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.762106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.762480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.762509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.762791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.762823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.763166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.763195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.763450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.763479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.763850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.763882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.764174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.764202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.764289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.764315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.764413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.764442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.764814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.764845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.765238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.765268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.765491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.765519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.765869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.765898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.766261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.766291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.766684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.766713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.766941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.766974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.767331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.767361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.767676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.767707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.767937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.767967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.768192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.768223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.768620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.768648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.768889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.768920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-11-06 14:11:44.769381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.769409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.769620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-11-06 14:11:44.769655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-11-06 14:11:44.769895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.769927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.770310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.770339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.770708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.770742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.770960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.770988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.771341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.771370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.771767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.771797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.772160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.772187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.772560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.772589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.772971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.773002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.555 [2024-11-06 14:11:44.773366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.555 [2024-11-06 14:11:44.773398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.555 [2024-11-06 14:11:44.773765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.773797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.774204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.774234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.774603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.774630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.774878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.774907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.775268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.775296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.775391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.775416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.775817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.775924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.776232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.776269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.776644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.776674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.777036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.777071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.777314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.777343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.777595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.777623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.777993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.778024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.778285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.778312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.778561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.778591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.778797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.778827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.779048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.779075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.779444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.779472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.779837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.779867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.780252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.780279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.780555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.780583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.780924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.780955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 
00:29:58.555 [2024-11-06 14:11:44.781334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.781363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.781699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.781727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.782014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.782049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.782437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.555 [2024-11-06 14:11:44.782465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.555 qpair failed and we were unable to recover it. 00:29:58.555 [2024-11-06 14:11:44.782626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.782654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.782914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.782951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.783240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.783269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.783632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.783660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.783903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.783931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.784316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.784344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.784581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.784609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.784944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.784975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.785338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.785367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.785741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.785781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.786010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.786038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.786402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.786430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.786699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.786726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.787104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.787133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.787509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.787537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.787923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.787953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.788339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.788368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.788742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.788782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.789173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.789539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.789568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.789778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.789810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.790176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.790205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.790593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.790623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.790968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.790997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.791355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.791383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.791519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf20000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.792012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.792114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.792555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.792592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.792987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.793021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.793412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.793444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.793695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.793730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 
00:29:58.556 [2024-11-06 14:11:44.793975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.794006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.794414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.794443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.794687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.794715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.556 qpair failed and we were unable to recover it. 00:29:58.556 [2024-11-06 14:11:44.794982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.556 [2024-11-06 14:11:44.795013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 00:29:58.557 [2024-11-06 14:11:44.795420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.795449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 
00:29:58.557 [2024-11-06 14:11:44.795826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.795856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 00:29:58.557 [2024-11-06 14:11:44.796241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.796270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 00:29:58.557 [2024-11-06 14:11:44.796656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.796687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 00:29:58.557 [2024-11-06 14:11:44.796966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.796995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 00:29:58.557 [2024-11-06 14:11:44.797365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.557 [2024-11-06 14:11:44.797394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.557 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.797779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.797828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.798225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.798254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.798484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.798514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.798879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.798910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.799289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.799319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.799682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.799711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.800077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.800107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.800468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.800499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.800874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.800903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.801166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.801198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.801329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.801357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.801710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.801738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.802003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.802032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.802479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.802507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.802769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.802800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.803035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.803067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.803320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.803348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.803585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.803618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.803974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.804234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.804262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.804642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.804671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.805041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.805071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.805318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.805725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.805761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.806054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.806083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 
00:29:58.819 [2024-11-06 14:11:44.806458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.806486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.806583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.806610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.806886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.806916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.807128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.807157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.819 qpair failed and we were unable to recover it. 00:29:58.819 [2024-11-06 14:11:44.807268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.819 [2024-11-06 14:11:44.807300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.807656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.807685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.808125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.808154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.808402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.808431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.808668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.808697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.809044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.809073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.809437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.809465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.809841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.809872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.810120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.810152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.810536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.810564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 Malloc0 00:29:58.820 [2024-11-06 14:11:44.810937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.811000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.811447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.811513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.820 [2024-11-06 14:11:44.811914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.811967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:58.820 [2024-11-06 14:11:44.812360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.812408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.820 [2024-11-06 14:11:44.812810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.812858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.820 [2024-11-06 14:11:44.813276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.813326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.813684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.813715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.814121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.814152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.814327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.814354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.814568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.814596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.814970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.815000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.815313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.815342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.815683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.815711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.816128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.816158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.816517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.816545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.816807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.816841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.817082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.817111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.817478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.817506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.817871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.817902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.818290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.818297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.820 [2024-11-06 14:11:44.818319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.818546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.818573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.818800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.818829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.819175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.819203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.819444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.819472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.819719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.819758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.820075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.820104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.820459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.820488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.820788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.820818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.821182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.821210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.821458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.821486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.821923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.821954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.822198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.822226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.822602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.822630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.822979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.823009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.823232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.823259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.823627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.823655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.823804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.823833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.824223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.824251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.824478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.824506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.824909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.824944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.825312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.825339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.825582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.825609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.825856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.825888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.826234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.826262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.826787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.826844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
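The records above are one tight retry loop: every connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) because no listener exists on the target yet. The behavior can be reproduced outside SPDK with a minimal probe; this is an illustrative sketch (the `probe` helper and the use of a throwaway localhost port are assumptions for the demo, not SPDK code):

```python
import errno
import socket

def probe(addr: str, port: int) -> int:
    """Return 0 on success, else the errno from the failed connect()."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((addr, port))

# Reserve an ephemeral port, then close it so nothing is listening there,
# mimicking the log's state before the target's listener comes up.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
free_port = tmp.getsockname()[1]
tmp.close()

rc = probe("127.0.0.1", free_port)
print(rc, errno.errorcode.get(rc))  # typically 111 ECONNREFUSED on Linux
```

Once the target's `nvmf_tcp_listen` notice appears later in the log, the same connect() starts succeeding and the failure moves up the stack.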
00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.820 [2024-11-06 14:11:44.827268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.820 [2024-11-06 14:11:44.827319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.820 qpair failed and we were unable to recover it.
00:29:58.820 [2024-11-06 14:11:44.827525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.820 [2024-11-06 14:11:44.827570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.820 qpair failed and we were unable to recover it.
00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.820 [2024-11-06 14:11:44.827978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.820 [2024-11-06 14:11:44.828028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.820 qpair failed and we were unable to recover it.
00:29:58.820 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.820 [2024-11-06 14:11:44.828352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.828405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.828789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.828829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.829069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.829098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.829506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.829872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.829903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.830278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.830307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.830566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.830594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.830975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.831007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.831343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.831371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 00:29:58.820 [2024-11-06 14:11:44.831830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.820 [2024-11-06 14:11:44.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.820 qpair failed and we were unable to recover it. 
00:29:58.820 [2024-11-06 14:11:44.832238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.832268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.832645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.832673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.833041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.833070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.833428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.833456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.833905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.833934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.834300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.834328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.834588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.834616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.835002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.835032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.835419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.835448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.835844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.835875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.836240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.836268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.836647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.836674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.837056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.837086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.837308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.837337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.837644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.837672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.837899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.837932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.838282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.838311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.838529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.838563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.838835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.838891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.839183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.839229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.821 [2024-11-06 14:11:44.839554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.839603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.821 [2024-11-06 14:11:44.840020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.840074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.821 [2024-11-06 14:11:44.840367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.840414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 [2024-11-06 14:11:44.840822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.840866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 [2024-11-06 14:11:44.841211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.841581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.841610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.841823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.841852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.842082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.842110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.842494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.842522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.842778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.842807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.842974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.843006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.843401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.843431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.843676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.843707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.844102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.844132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.844501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.844529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.844898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.844928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.845154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.845182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.845371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.845404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.845789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.845819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.846187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.846214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.846325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.846351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.846755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.846786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.847023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.847052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.847441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.847471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.847836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.847874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.848132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.848162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.848536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.848565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.848807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.848838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.849231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.849260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.849626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.849655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.850004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.850034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.850444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.850474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.850838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.850894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.851149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.851196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.821 [2024-11-06 14:11:44.851477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.851526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:58.821 [2024-11-06 14:11:44.851941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.851998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.821 [2024-11-06 14:11:44.852315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.821 [2024-11-06 14:11:44.852366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.821 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.821 qpair failed and we were unable to recover it.
00:29:58.821 [2024-11-06 14:11:44.852651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.852711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.853194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.853230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.853468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.853498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.853866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.853897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.854198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.854228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 
00:29:58.821 [2024-11-06 14:11:44.854589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.854619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.855000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.855031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.821 qpair failed and we were unable to recover it. 00:29:58.821 [2024-11-06 14:11:44.855270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.821 [2024-11-06 14:11:44.855305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.822 qpair failed and we were unable to recover it. 00:29:58.822 [2024-11-06 14:11:44.855545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.822 [2024-11-06 14:11:44.855575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.822 qpair failed and we were unable to recover it. 00:29:58.822 [2024-11-06 14:11:44.855837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.822 [2024-11-06 14:11:44.855870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420 00:29:58.822 qpair failed and we were unable to recover it. 
00:29:58.822 [2024-11-06 14:11:44.856246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.856278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.856497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.856528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.856805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.856838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.857223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.857253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.857630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.857659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.858016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.858048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.858414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.822 [2024-11-06 14:11:44.858443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf2c000b90 with addr=10.0.0.2, port=4420
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.858699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:58.822 [2024-11-06 14:11:44.859929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.860097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.860150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.860175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.860196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.860253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.822 [2024-11-06 14:11:44.869447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.869567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.869616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.869640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.869663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.869718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.822 14:11:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2600808
00:29:58.822 [2024-11-06 14:11:44.879366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.879463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.879498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.879515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.879529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.879583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.889461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.889553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.889577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.889590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.889599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.889625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.899493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.899579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.899597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.899604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.899611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.899628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.909281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.909344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.909361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.909369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.909375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.909392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.919471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.919531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.919549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.919556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.919563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.919580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.929460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.929532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.929550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.929557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.929564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.929581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.939556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.939672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.939689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.939697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.939703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.939720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.949579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.949648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.949665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.949673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.949679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.949696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.959602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.959667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.959688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.959696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.959702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.959718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.969600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.969669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.969686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.969693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.969700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.969716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.979630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.979706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.979723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.979731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.979737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.979761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.989633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.989697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.989713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.989721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.989727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.989743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:44.999663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:44.999730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:44.999752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:44.999760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:44.999772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:44.999790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:45.009721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:45.009802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:45.009819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:45.009826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:45.009833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:45.009850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:45.019740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:45.019810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:45.019827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:45.019835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:45.019841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:45.019857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:45.029744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:45.029811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:45.029827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:45.029835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:45.029841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:45.029858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:45.039808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:45.039871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.822 [2024-11-06 14:11:45.039888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.822 [2024-11-06 14:11:45.039895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.822 [2024-11-06 14:11:45.039902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.822 [2024-11-06 14:11:45.039919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.822 qpair failed and we were unable to recover it.
00:29:58.822 [2024-11-06 14:11:45.049845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.822 [2024-11-06 14:11:45.049911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.823 [2024-11-06 14:11:45.049928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.823 [2024-11-06 14:11:45.049935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.823 [2024-11-06 14:11:45.049942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.823 [2024-11-06 14:11:45.049958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.823 qpair failed and we were unable to recover it.
00:29:58.823 [2024-11-06 14:11:45.059922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.823 [2024-11-06 14:11:45.060004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.823 [2024-11-06 14:11:45.060021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.823 [2024-11-06 14:11:45.060028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.823 [2024-11-06 14:11:45.060035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.823 [2024-11-06 14:11:45.060051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.823 qpair failed and we were unable to recover it.
00:29:58.823 [2024-11-06 14:11:45.069897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.823 [2024-11-06 14:11:45.070017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.823 [2024-11-06 14:11:45.070034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.823 [2024-11-06 14:11:45.070041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.823 [2024-11-06 14:11:45.070048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.823 [2024-11-06 14:11:45.070064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.823 qpair failed and we were unable to recover it.
00:29:58.823 [2024-11-06 14:11:45.079858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.823 [2024-11-06 14:11:45.079925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.823 [2024-11-06 14:11:45.079941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.823 [2024-11-06 14:11:45.079948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.823 [2024-11-06 14:11:45.079955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.823 [2024-11-06 14:11:45.079971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.823 qpair failed and we were unable to recover it.
00:29:58.823 [2024-11-06 14:11:45.090021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.823 [2024-11-06 14:11:45.090097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.823 [2024-11-06 14:11:45.090118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.823 [2024-11-06 14:11:45.090126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.823 [2024-11-06 14:11:45.090133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:58.823 [2024-11-06 14:11:45.090148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.823 qpair failed and we were unable to recover it.
00:29:59.085 [2024-11-06 14:11:45.100049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.085 [2024-11-06 14:11:45.100119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.085 [2024-11-06 14:11:45.100137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.085 [2024-11-06 14:11:45.100145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.085 [2024-11-06 14:11:45.100152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.085 [2024-11-06 14:11:45.100168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.085 qpair failed and we were unable to recover it.
00:29:59.085 [2024-11-06 14:11:45.109904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.085 [2024-11-06 14:11:45.109970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.085 [2024-11-06 14:11:45.109988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.085 [2024-11-06 14:11:45.109996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.085 [2024-11-06 14:11:45.110002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.085 [2024-11-06 14:11:45.110019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.085 qpair failed and we were unable to recover it.
00:29:59.085 [2024-11-06 14:11:45.120071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.085 [2024-11-06 14:11:45.120135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.085 [2024-11-06 14:11:45.120151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.085 [2024-11-06 14:11:45.120159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.085 [2024-11-06 14:11:45.120166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.085 [2024-11-06 14:11:45.120183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.085 qpair failed and we were unable to recover it.
00:29:59.085 [2024-11-06 14:11:45.130091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.085 [2024-11-06 14:11:45.130160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.085 [2024-11-06 14:11:45.130177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.085 [2024-11-06 14:11:45.130185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.085 [2024-11-06 14:11:45.130198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.085 [2024-11-06 14:11:45.130216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.085 qpair failed and we were unable to recover it.
00:29:59.085 [2024-11-06 14:11:45.140189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.140258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.140275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.140284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-06 14:11:45.140290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.085 [2024-11-06 14:11:45.140307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.085 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-06 14:11:45.150162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.150223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.150239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.150247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-06 14:11:45.150256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.085 [2024-11-06 14:11:45.150272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.085 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-06 14:11:45.160202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.160269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.160286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.160293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-06 14:11:45.160301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.085 [2024-11-06 14:11:45.160318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.085 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-06 14:11:45.170210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.170276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.170294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.170301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-06 14:11:45.170308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.085 [2024-11-06 14:11:45.170324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.085 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-06 14:11:45.180291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.180369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.180386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.180393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-06 14:11:45.180404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.085 [2024-11-06 14:11:45.180421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.085 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-06 14:11:45.190249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-06 14:11:45.190312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-06 14:11:45.190329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-06 14:11:45.190337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.190343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.190359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.200328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.200408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.200424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.200431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.200438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.200454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.210320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.210391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.210410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.210419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.210427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.210446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.220370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.220453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.220473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.220481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.220487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.220503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.230375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.230479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.230515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.230525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.230534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.230558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.240398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.240467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.240503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.240513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.240521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.240544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.250451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.250521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.250540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.250547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.250554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.250573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.260511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.260587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.260604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.260617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.260624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.260641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.270508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.270572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.270590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.270597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.270604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.270620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.280397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.280463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.280480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.280487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.280494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.280511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.290579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.290647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.290663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.290670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.290677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.290692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.300630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.300736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.300759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.300766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.300773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.300796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.310610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.310678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.310695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.310702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.310708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.086 [2024-11-06 14:11:45.310725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-06 14:11:45.320631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-06 14:11:45.320696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-06 14:11:45.320713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-06 14:11:45.320725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-06 14:11:45.320732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.087 [2024-11-06 14:11:45.320754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-06 14:11:45.330627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-06 14:11:45.330701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-06 14:11:45.330719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-06 14:11:45.330727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-06 14:11:45.330733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.087 [2024-11-06 14:11:45.330756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-06 14:11:45.340628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-06 14:11:45.340705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-06 14:11:45.340726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-06 14:11:45.340734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-06 14:11:45.340740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.087 [2024-11-06 14:11:45.340767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-06 14:11:45.350620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-06 14:11:45.350694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-06 14:11:45.350713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-06 14:11:45.350720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-06 14:11:45.350726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.087 [2024-11-06 14:11:45.350744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-06 14:11:45.360799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-06 14:11:45.360859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-06 14:11:45.360875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-06 14:11:45.360883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-06 14:11:45.360890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.087 [2024-11-06 14:11:45.360909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.370707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.370791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.370808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.370816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.370822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.349 [2024-11-06 14:11:45.370839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.380862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.380941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.380958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.380965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.380972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.349 [2024-11-06 14:11:45.380988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.390868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.390936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.390954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.390967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.390973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.349 [2024-11-06 14:11:45.390990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.400785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.400879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.400896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.400904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.400910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.349 [2024-11-06 14:11:45.400927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.410901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.410976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.410993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.411000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.411007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.349 [2024-11-06 14:11:45.411023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-06 14:11:45.421013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-06 14:11:45.421087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-06 14:11:45.421104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-06 14:11:45.421111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-06 14:11:45.421117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.350 [2024-11-06 14:11:45.421134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.350 qpair failed and we were unable to recover it. 
00:29:59.350 [2024-11-06 14:11:45.431000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.431060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.431077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.431084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.431091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.431114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.440984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.441050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.441066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.441074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.441081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.441097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.451042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.451118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.451135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.451142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.451148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.451164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.461124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.461204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.461219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.461227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.461233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.461249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.471090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.471153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.471169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.471177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.471183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.471199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.481141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.481249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.481266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.481274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.481280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.481296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.491142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.491211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.491227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.491234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.491241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.491257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.501235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.501298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.501314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.501322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.501328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.501344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.511216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.511276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.511293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.511301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.511307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.511322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.521228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.521288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.521310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.521318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.521324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.521340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.531290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.531359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.531377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.531384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.531391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.531406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.541355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.541424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.541441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.541448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.541455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.350 [2024-11-06 14:11:45.541471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.350 qpair failed and we were unable to recover it.
00:29:59.350 [2024-11-06 14:11:45.551335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.350 [2024-11-06 14:11:45.551409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.350 [2024-11-06 14:11:45.551445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.350 [2024-11-06 14:11:45.551454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.350 [2024-11-06 14:11:45.551461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.551486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.561242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.561308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.561329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.561337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.561350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.561370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.571422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.571493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.571514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.571521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.571528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.571546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.581480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.581563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.581600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.581609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.581616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.581642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.591447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.591511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.591531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.591539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.591546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.591564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.601481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.601546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.601564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.601571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.601578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.601595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.611528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.611594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.611611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.611618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.611625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.611642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.351 [2024-11-06 14:11:45.621569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.351 [2024-11-06 14:11:45.621645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.351 [2024-11-06 14:11:45.621661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.351 [2024-11-06 14:11:45.621668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.351 [2024-11-06 14:11:45.621675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.351 [2024-11-06 14:11:45.621691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.351 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.631579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.631681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.631699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.631706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.631713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.631730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.641616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.641679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.641696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.641703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.641710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.641726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.651667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.651738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.651769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.651777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.651783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.651800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.661732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.661808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.661825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.661833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.661839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.661856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.671729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.671800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.671817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.671824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.671833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.671850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.681753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.681819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.681836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.681843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.681850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.681866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.691810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.691875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.691893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.691900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.691912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.691929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.701721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.701799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.701817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.615 [2024-11-06 14:11:45.701824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.615 [2024-11-06 14:11:45.701830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.615 [2024-11-06 14:11:45.701846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.615 qpair failed and we were unable to recover it.
00:29:59.615 [2024-11-06 14:11:45.711844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.615 [2024-11-06 14:11:45.711908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.615 [2024-11-06 14:11:45.711925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.711932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.711938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.711955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.721865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.721930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.721947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.721955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.721962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.721977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.731936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.732004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.732021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.732029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.732035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.732052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.741955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.742046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.742063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.742070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.742077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.742093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.751963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.752026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.752043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.752051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.752057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.752074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.761895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.761958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.761975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.761982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.761988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.762005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.771976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.616 [2024-11-06 14:11:45.772044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.616 [2024-11-06 14:11:45.772060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.616 [2024-11-06 14:11:45.772068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.616 [2024-11-06 14:11:45.772074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.616 [2024-11-06 14:11:45.772090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.616 qpair failed and we were unable to recover it.
00:29:59.616 [2024-11-06 14:11:45.782172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.782243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.782264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.782271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.782278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.782294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.792165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.792231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.792248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.792255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.792262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.792278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.802085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.802144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.802161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.802169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.802175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.802192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.812085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.812157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.812174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.812181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.812187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.812203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.822276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.822389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.822406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.822418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.822425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.822442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.832279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.832348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.832366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.832373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.616 [2024-11-06 14:11:45.832380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.616 [2024-11-06 14:11:45.832396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.616 qpair failed and we were unable to recover it. 
00:29:59.616 [2024-11-06 14:11:45.842276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.616 [2024-11-06 14:11:45.842344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.616 [2024-11-06 14:11:45.842362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.616 [2024-11-06 14:11:45.842369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-06 14:11:45.842375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.617 [2024-11-06 14:11:45.842391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.617 [2024-11-06 14:11:45.852304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-06 14:11:45.852402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-06 14:11:45.852419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-06 14:11:45.852427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-06 14:11:45.852434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.617 [2024-11-06 14:11:45.852450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.617 [2024-11-06 14:11:45.862351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-06 14:11:45.862432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-06 14:11:45.862450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-06 14:11:45.862459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-06 14:11:45.862467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.617 [2024-11-06 14:11:45.862491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.617 [2024-11-06 14:11:45.872318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-06 14:11:45.872378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-06 14:11:45.872397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-06 14:11:45.872404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-06 14:11:45.872410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.617 [2024-11-06 14:11:45.872426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.617 [2024-11-06 14:11:45.882370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-06 14:11:45.882438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-06 14:11:45.882454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-06 14:11:45.882461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-06 14:11:45.882468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.617 [2024-11-06 14:11:45.882484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.892486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.892551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.892568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.892575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.892581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.892598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.902504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.902578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.902595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.902602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.902608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.902624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.912482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.912562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.912578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.912586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.912592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.912608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.922498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.922562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.922579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.922587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.922593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.922609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.932570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.932639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.932678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.932686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.932692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.932717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.942502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.942574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.942592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.942599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.942606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.942623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.952609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.952676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.952693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.952709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.952716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.952732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.962632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.962689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.962707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.962714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.962721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.880 [2024-11-06 14:11:45.962737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-06 14:11:45.972559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-06 14:11:45.972626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-06 14:11:45.972647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-06 14:11:45.972655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-06 14:11:45.972661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:45.972680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:45.982762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:45.982835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:45.982855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:45.982862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:45.982868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:45.982885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:45.992790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:45.992891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:45.992909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:45.992916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:45.992922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:45.992945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.002720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:46.002795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:46.002812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:46.002820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:46.002827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:46.002843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.012797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:46.012864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:46.012881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:46.012888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:46.012894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:46.012911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.022919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:46.023014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:46.023031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:46.023038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:46.023044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:46.023060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.032862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:46.032921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:46.032938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:46.032945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:46.032952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:46.032969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.042884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-06 14:11:46.042991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-06 14:11:46.043008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-06 14:11:46.043016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-06 14:11:46.043022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:29:59.881 [2024-11-06 14:11:46.043038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-06 14:11:46.052930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.053016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.053032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.053040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.053046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.881 [2024-11-06 14:11:46.053062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-06 14:11:46.062994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.063076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.063093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.063100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.063107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.881 [2024-11-06 14:11:46.063123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-06 14:11:46.073005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.073073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.073090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.073097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.073103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.881 [2024-11-06 14:11:46.073120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-06 14:11:46.083049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.083114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.083136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.083143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.083149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.881 [2024-11-06 14:11:46.083165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-06 14:11:46.093078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.093156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.093172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.093179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.093185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.881 [2024-11-06 14:11:46.093201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-06 14:11:46.103148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-06 14:11:46.103220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-06 14:11:46.103238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-06 14:11:46.103245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-06 14:11:46.103252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.103268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-06 14:11:46.113113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-06 14:11:46.113185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-06 14:11:46.113203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-06 14:11:46.113210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-06 14:11:46.113217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.113232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-06 14:11:46.123155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-06 14:11:46.123217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-06 14:11:46.123234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-06 14:11:46.123241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-06 14:11:46.123253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.123269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-06 14:11:46.133199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-06 14:11:46.133304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-06 14:11:46.133321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-06 14:11:46.133329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-06 14:11:46.133336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.133351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-06 14:11:46.143280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-06 14:11:46.143360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-06 14:11:46.143377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-06 14:11:46.143384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-06 14:11:46.143391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.143407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-06 14:11:46.153316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-06 14:11:46.153379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-06 14:11:46.153396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-06 14:11:46.153403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-06 14:11:46.153409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:29:59.882 [2024-11-06 14:11:46.153426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.882 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.163188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.163257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.163274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.163281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.163288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.163303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.173321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.173388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.173404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.173412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.173418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.173434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.183416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.183487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.183503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.183511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.183517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.183534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.193396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.193476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.193493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.193501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.193508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.193523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.203309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.203367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.203388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.203396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.203402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.203420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.213336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.213423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.213446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.213454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.213461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.213478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.223512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.223584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-06 14:11:46.223601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-06 14:11:46.223609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-06 14:11:46.223616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.145 [2024-11-06 14:11:46.223632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-06 14:11:46.233487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-06 14:11:46.233542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.233559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.233566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.233573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.233589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.243561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.243617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.243633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.243640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.243646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.243663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.253593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.253660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.253676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.253684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.253696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.253712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.263628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.263701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.263717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.263725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.263731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.263752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.273634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.273704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.273721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.273728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.273734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.273755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.283526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.283591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.283607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.283614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.283621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.283637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.293718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.293798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.293815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.293822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.293829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.293845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.303651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.303738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.303759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.303767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.303773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.303790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.313776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.313888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.313904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.313911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.313918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.313933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.323648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.323721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.323740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.323752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.323759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.323782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.333731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.333803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.333821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.333828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.333835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.333851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.343883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.343972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.343989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.343997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.344003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.344019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.353908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.353979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.353996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.146 [2024-11-06 14:11:46.354004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.146 [2024-11-06 14:11:46.354010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.146 [2024-11-06 14:11:46.354026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.146 qpair failed and we were unable to recover it.
00:30:00.146 [2024-11-06 14:11:46.363970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.146 [2024-11-06 14:11:46.364059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.146 [2024-11-06 14:11:46.364077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.147 [2024-11-06 14:11:46.364085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.147 [2024-11-06 14:11:46.364091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.147 [2024-11-06 14:11:46.364107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.147 qpair failed and we were unable to recover it.
00:30:00.147 [2024-11-06 14:11:46.374009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.147 [2024-11-06 14:11:46.374107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.147 [2024-11-06 14:11:46.374126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.147 [2024-11-06 14:11:46.374133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.147 [2024-11-06 14:11:46.374142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.147 [2024-11-06 14:11:46.374158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.147 qpair failed and we were unable to recover it.
00:30:00.147 [2024-11-06 14:11:46.383887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.147 [2024-11-06 14:11:46.383974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.147 [2024-11-06 14:11:46.383991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.147 [2024-11-06 14:11:46.384006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.147 [2024-11-06 14:11:46.384012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.147 [2024-11-06 14:11:46.384029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.147 qpair failed and we were unable to recover it.
00:30:00.147 [2024-11-06 14:11:46.393898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.147 [2024-11-06 14:11:46.393959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.147 [2024-11-06 14:11:46.393975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.147 [2024-11-06 14:11:46.393982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.147 [2024-11-06 14:11:46.393989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.147 [2024-11-06 14:11:46.394004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.147 qpair failed and we were unable to recover it.
00:30:00.147 [2024-11-06 14:11:46.404019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.147 [2024-11-06 14:11:46.404144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.147 [2024-11-06 14:11:46.404161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.147 [2024-11-06 14:11:46.404168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.147 [2024-11-06 14:11:46.404175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.147 [2024-11-06 14:11:46.404191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.147 qpair failed and we were unable to recover it. 
00:30:00.147 [2024-11-06 14:11:46.414118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.147 [2024-11-06 14:11:46.414183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.147 [2024-11-06 14:11:46.414199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.147 [2024-11-06 14:11:46.414206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.147 [2024-11-06 14:11:46.414213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.147 [2024-11-06 14:11:46.414228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.147 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.424163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.424254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.424270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.424278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.424284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.424306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.434098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.434154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.434170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.434178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.434184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.434200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.444116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.444168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.444183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.444190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.444196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.444211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.454172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.454232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.454246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.454254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.454260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.454275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.464212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.464277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.464292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.464299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.464305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.464320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.474156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.474223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.474239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.474246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.474252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.474267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.484208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.484264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.484278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.484286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.484292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.484306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.494284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.494353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.494367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.494375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.494381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.494395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.504315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.504378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.504392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.504398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.504405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.504419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.514248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.514300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.514314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.514325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.514332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.514346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.524298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.524343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.524357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.524364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.524370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.524384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.534352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.534429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.534443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.534450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.534456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.534470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.544413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.544470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.544484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.544491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.544498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.544512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.554373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.554416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.554430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.554437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.554443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.554461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.564406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.564493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.564507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.564514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.564520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.564534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.574474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.574530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.574543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.574550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.574557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.574571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.584460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.410 [2024-11-06 14:11:46.584546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.410 [2024-11-06 14:11:46.584560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.410 [2024-11-06 14:11:46.584567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.410 [2024-11-06 14:11:46.584573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.410 [2024-11-06 14:11:46.584587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.410 qpair failed and we were unable to recover it. 
00:30:00.410 [2024-11-06 14:11:46.594480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.594525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.594538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.594545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.594551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.594565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.604499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.604584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.604597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.604604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.604610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.604625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.614397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.614444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.614458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.614465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.614471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.614486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.624428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.624478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.624491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.624498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.624504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.624519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.634585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.634631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.634644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.634651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.634658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.634672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.644595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.644639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.644656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.644663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.644669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.644683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.654662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.654708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.654721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.654728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.654735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.654752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.664675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.411 [2024-11-06 14:11:46.664725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.411 [2024-11-06 14:11:46.664737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.411 [2024-11-06 14:11:46.664747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.411 [2024-11-06 14:11:46.664754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.411 [2024-11-06 14:11:46.664768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.411 qpair failed and we were unable to recover it. 
00:30:00.411 [2024-11-06 14:11:46.674679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.411 [2024-11-06 14:11:46.674724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.411 [2024-11-06 14:11:46.674736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.411 [2024-11-06 14:11:46.674743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.411 [2024-11-06 14:11:46.674753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.411 [2024-11-06 14:11:46.674767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.411 qpair failed and we were unable to recover it.
00:30:00.411 [2024-11-06 14:11:46.684706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.411 [2024-11-06 14:11:46.684751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.411 [2024-11-06 14:11:46.684765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.411 [2024-11-06 14:11:46.684772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.411 [2024-11-06 14:11:46.684781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.411 [2024-11-06 14:11:46.684795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.411 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.694615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.694658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.694671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.694678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.694685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.694704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.704647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.704699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.704712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.704719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.704725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.704739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.714687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.714730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.714743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.714753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.714759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.714773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.724680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.724722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.724734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.724741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.724751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.724765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.734905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.734963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.734977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.734984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.734990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.735004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.744904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.744980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.744993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.744999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.745006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.745019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.754939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.755012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.755024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.755031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.755038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.755051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.764924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.764970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.764983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.764989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.764995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.765009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.774940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.774984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.775004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.775012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.775018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.775031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.784986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.785078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.785091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.785098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.785104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.785118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.794994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.795042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.795055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.795061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.795068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.676 [2024-11-06 14:11:46.795081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.676 qpair failed and we were unable to recover it.
00:30:00.676 [2024-11-06 14:11:46.805028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.676 [2024-11-06 14:11:46.805073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.676 [2024-11-06 14:11:46.805087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.676 [2024-11-06 14:11:46.805094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.676 [2024-11-06 14:11:46.805100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.805113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.815077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.815122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.815135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.815142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.815152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.815166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.825115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.825190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.825203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.825210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.825216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.825229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.835113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.835153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.835166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.835173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.835179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.835193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.845110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.845155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.845168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.845174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.845181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.845194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.855175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.855220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.855232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.855240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.855246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.855260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.865263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.865332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.865345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.865353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.865359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.865372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.875226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.875267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.875280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.875287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.875294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.875307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.885232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.885277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.885290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.885297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.885303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.885316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.895274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.895318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.895331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.895338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.895344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.895358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.905356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.905427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.905440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.905447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.905453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.905467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.915306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.915347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.915359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.915366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.915373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.915386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.925340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.925381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.925394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.925401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.925407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.677 [2024-11-06 14:11:46.925421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.677 qpair failed and we were unable to recover it.
00:30:00.677 [2024-11-06 14:11:46.935375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.677 [2024-11-06 14:11:46.935424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.677 [2024-11-06 14:11:46.935439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.677 [2024-11-06 14:11:46.935446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.677 [2024-11-06 14:11:46.935453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.678 [2024-11-06 14:11:46.935472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.678 qpair failed and we were unable to recover it.
00:30:00.678 [2024-11-06 14:11:46.945408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.678 [2024-11-06 14:11:46.945458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.678 [2024-11-06 14:11:46.945472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.678 [2024-11-06 14:11:46.945482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.678 [2024-11-06 14:11:46.945488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.678 [2024-11-06 14:11:46.945502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.678 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:46.955462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:46.955543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:46.955556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:46.955563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:46.955569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:46.955583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:46.965443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:46.965529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:46.965542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:46.965550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:46.965556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:46.965570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:46.975493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:46.975543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:46.975556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:46.975563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:46.975569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:46.975582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:46.985513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:46.985561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:46.985574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:46.985580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:46.985587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:46.985604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:46.995538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:46.995581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:46.995594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:46.995601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:46.995607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:46.995621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:47.005555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:47.005606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:47.005619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:47.005626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:47.005635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:47.005649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:47.015566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.941 [2024-11-06 14:11:47.015610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.941 [2024-11-06 14:11:47.015623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.941 [2024-11-06 14:11:47.015630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.941 [2024-11-06 14:11:47.015636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:00.941 [2024-11-06 14:11:47.015649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.941 qpair failed and we were unable to recover it.
00:30:00.941 [2024-11-06 14:11:47.025627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.941 [2024-11-06 14:11:47.025681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.941 [2024-11-06 14:11:47.025694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.941 [2024-11-06 14:11:47.025701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.941 [2024-11-06 14:11:47.025707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.941 [2024-11-06 14:11:47.025721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.941 qpair failed and we were unable to recover it. 
00:30:00.941 [2024-11-06 14:11:47.035519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.941 [2024-11-06 14:11:47.035564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.941 [2024-11-06 14:11:47.035578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.941 [2024-11-06 14:11:47.035586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.941 [2024-11-06 14:11:47.035592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.941 [2024-11-06 14:11:47.035606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.941 qpair failed and we were unable to recover it. 
00:30:00.941 [2024-11-06 14:11:47.045535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.941 [2024-11-06 14:11:47.045583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.941 [2024-11-06 14:11:47.045596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.941 [2024-11-06 14:11:47.045603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.941 [2024-11-06 14:11:47.045610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.941 [2024-11-06 14:11:47.045624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.941 qpair failed and we were unable to recover it. 
00:30:00.941 [2024-11-06 14:11:47.055709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.941 [2024-11-06 14:11:47.055814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.941 [2024-11-06 14:11:47.055828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.941 [2024-11-06 14:11:47.055835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.941 [2024-11-06 14:11:47.055841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.941 [2024-11-06 14:11:47.055855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.941 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.065730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.065780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.065793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.065800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.065807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.065820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.075627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.075668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.075681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.075691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.075697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.075711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.085836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.085885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.085898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.085906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.085912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.085926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.095844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.095892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.095905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.095912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.095918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.095932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.105890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.105983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.105997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.106004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.106010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.106024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.115908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.115954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.115966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.115973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.115980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.115998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.125862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.125904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.125917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.125924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.125931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.125944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.135952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.135995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.136008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.136015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.136022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.136035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.145941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.145992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.146005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.146012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.146018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.146032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.155985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.156026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.156039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.156046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.156053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.156066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.165986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.166043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.166056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.166063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.166069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.166083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.176005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.176048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.176060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.176067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.176074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.176088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.942 qpair failed and we were unable to recover it. 
00:30:00.942 [2024-11-06 14:11:47.186079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.942 [2024-11-06 14:11:47.186130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.942 [2024-11-06 14:11:47.186142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.942 [2024-11-06 14:11:47.186149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.942 [2024-11-06 14:11:47.186156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.942 [2024-11-06 14:11:47.186169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.943 qpair failed and we were unable to recover it. 
00:30:00.943 [2024-11-06 14:11:47.195950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.943 [2024-11-06 14:11:47.195995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.943 [2024-11-06 14:11:47.196008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.943 [2024-11-06 14:11:47.196015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.943 [2024-11-06 14:11:47.196022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.943 [2024-11-06 14:11:47.196035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.943 qpair failed and we were unable to recover it. 
00:30:00.943 [2024-11-06 14:11:47.205988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.943 [2024-11-06 14:11:47.206030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.943 [2024-11-06 14:11:47.206047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.943 [2024-11-06 14:11:47.206055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.943 [2024-11-06 14:11:47.206061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.943 [2024-11-06 14:11:47.206075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.943 qpair failed and we were unable to recover it. 
00:30:00.943 [2024-11-06 14:11:47.216145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.943 [2024-11-06 14:11:47.216192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.943 [2024-11-06 14:11:47.216205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.943 [2024-11-06 14:11:47.216212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.943 [2024-11-06 14:11:47.216218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:00.943 [2024-11-06 14:11:47.216232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.943 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.226036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.226081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.226094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.205 [2024-11-06 14:11:47.226101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.205 [2024-11-06 14:11:47.226108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.205 [2024-11-06 14:11:47.226121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.205 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.236205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.236248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.236261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.205 [2024-11-06 14:11:47.236268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.205 [2024-11-06 14:11:47.236274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.205 [2024-11-06 14:11:47.236288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.205 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.246243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.246323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.246336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.205 [2024-11-06 14:11:47.246343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.205 [2024-11-06 14:11:47.246353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.205 [2024-11-06 14:11:47.246367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.205 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.256115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.256160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.256173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.205 [2024-11-06 14:11:47.256180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.205 [2024-11-06 14:11:47.256186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.205 [2024-11-06 14:11:47.256200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.205 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.266290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.266349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.266362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.205 [2024-11-06 14:11:47.266370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.205 [2024-11-06 14:11:47.266376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.205 [2024-11-06 14:11:47.266390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.205 qpair failed and we were unable to recover it. 
00:30:01.205 [2024-11-06 14:11:47.276162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.205 [2024-11-06 14:11:47.276209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.205 [2024-11-06 14:11:47.276222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.206 [2024-11-06 14:11:47.276229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.206 [2024-11-06 14:11:47.276235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.206 [2024-11-06 14:11:47.276249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.206 qpair failed and we were unable to recover it. 
00:30:01.206 [2024-11-06 14:11:47.286326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.206 [2024-11-06 14:11:47.286370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.206 [2024-11-06 14:11:47.286383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.206 [2024-11-06 14:11:47.286390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.206 [2024-11-06 14:11:47.286396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.206 [2024-11-06 14:11:47.286410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.206 qpair failed and we were unable to recover it. 
00:30:01.206 [2024-11-06 14:11:47.296385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.206 [2024-11-06 14:11:47.296433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.206 [2024-11-06 14:11:47.296446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.206 [2024-11-06 14:11:47.296453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.206 [2024-11-06 14:11:47.296460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.206 [2024-11-06 14:11:47.296474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.206 qpair failed and we were unable to recover it. 
00:30:01.206 [... the same CONNECT failure cycle repeats 34 more times at ~10 ms intervals, 2024-11-06 14:11:47.306254 through 14:11:47.637408 (elapsed 00:30:01.206-00:30:01.478): "Unknown controller ID 0x1" -> "Connect command failed, rc -5" -> "Connect command completed with error: sct 1, sc 130" -> "Failed to poll NVMe-oF Fabric CONNECT command" -> "Failed to connect tqpair=0x7fbf2c000b90" -> "CQ transport error -6 (No such device or address) on qpair id 1" -> "qpair failed and we were unable to recover it." ...]
00:30:01.478 [2024-11-06 14:11:47.647307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.478 [2024-11-06 14:11:47.647400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.478 [2024-11-06 14:11:47.647413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.478 [2024-11-06 14:11:47.647420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.478 [2024-11-06 14:11:47.647426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.647440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.657348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.657395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.657408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.657415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.657421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.657435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.667251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.667303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.667315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.667322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.667328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.667342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.677260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.677308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.677322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.677329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.677336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.677354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.687447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.687493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.687506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.687513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.687519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.687533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.697424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.697472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.697485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.697492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.697499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.697512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.707476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.707525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.707539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.707546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.707552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.707566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.717473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.717514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.717526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.717533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.717540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.717554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.727531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.727578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.727591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.727598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.727604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.727618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.479 [2024-11-06 14:11:47.737569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.479 [2024-11-06 14:11:47.737615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.479 [2024-11-06 14:11:47.737628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.479 [2024-11-06 14:11:47.737635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.479 [2024-11-06 14:11:47.737641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.479 [2024-11-06 14:11:47.737655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.479 qpair failed and we were unable to recover it. 
00:30:01.874 [2024-11-06 14:11:47.747573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-06 14:11:47.747622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-06 14:11:47.747635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-06 14:11:47.747642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-06 14:11:47.747648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.874 [2024-11-06 14:11:47.747662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.874 qpair failed and we were unable to recover it. 
00:30:01.874 [2024-11-06 14:11:47.757568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-06 14:11:47.757614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-06 14:11:47.757627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-06 14:11:47.757635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-06 14:11:47.757641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.874 [2024-11-06 14:11:47.757655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.874 qpair failed and we were unable to recover it. 
00:30:01.874 [2024-11-06 14:11:47.767626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-06 14:11:47.767715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-06 14:11:47.767731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-06 14:11:47.767738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.767749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.767764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.777654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.777714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.777727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.777734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.777740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.777758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.787676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.787726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.787739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.787750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.787756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.787771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.797710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.797798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.797812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.797819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.797825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.797839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.807728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.807772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.807786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.807793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.807802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.807816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.817768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.817813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.817826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.817833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.817839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.817853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.827825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.827874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.827886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.827893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.827900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.827914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.837791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.837835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.837848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.837855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.837861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.837875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.847843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.847889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.847902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.847909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.847915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.847929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.857854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.857898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.857911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.857917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.857924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.857938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.867909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.867961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.867976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.867983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.867989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.868006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.877787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.877837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.877851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.877858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.877864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.877879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.875 qpair failed and we were unable to recover it. 
00:30:01.875 [2024-11-06 14:11:47.887851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.875 [2024-11-06 14:11:47.887913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.875 [2024-11-06 14:11:47.887926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.875 [2024-11-06 14:11:47.887933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.875 [2024-11-06 14:11:47.887939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.875 [2024-11-06 14:11:47.887953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.876 qpair failed and we were unable to recover it. 
00:30:01.876 [2024-11-06 14:11:47.897964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.876 [2024-11-06 14:11:47.898013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.876 [2024-11-06 14:11:47.898029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.876 [2024-11-06 14:11:47.898036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.876 [2024-11-06 14:11:47.898042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.876 [2024-11-06 14:11:47.898056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.876 qpair failed and we were unable to recover it. 
00:30:01.876 [2024-11-06 14:11:47.907998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.876 [2024-11-06 14:11:47.908049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.876 [2024-11-06 14:11:47.908062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.876 [2024-11-06 14:11:47.908069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.876 [2024-11-06 14:11:47.908075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:01.876 [2024-11-06 14:11:47.908089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.876 qpair failed and we were unable to recover it. 
00:30:01.876 [2024-11-06 14:11:47.918018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.918065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.918078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.918085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.918091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.918104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.928067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.928142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.928155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.928162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.928169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.928182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.938052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.938120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.938133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.938144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.938150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.938164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.948125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.948204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.948217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.948224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.948230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.948244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.958149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.958194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.958207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.958214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.958220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.958234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.968175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.968223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.968237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.968244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.968251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.968269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.978206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.978251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.978264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.978271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.978278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.978292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.988193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.988243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.988256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.988263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.988269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.988283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:47.998246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:47.998307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:47.998320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:47.998326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:47.998333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:47.998346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:48.008339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:48.008402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:48.008415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.876 [2024-11-06 14:11:48.008422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.876 [2024-11-06 14:11:48.008429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.876 [2024-11-06 14:11:48.008442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.876 qpair failed and we were unable to recover it.
00:30:01.876 [2024-11-06 14:11:48.018185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.876 [2024-11-06 14:11:48.018231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.876 [2024-11-06 14:11:48.018244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.018251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.018257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.018271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.028224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.028276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.028289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.028296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.028302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.028316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.038360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.038403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.038416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.038423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.038429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.038443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.048255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.048297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.048310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.048317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.048324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.048337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.058424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.058468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.058481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.058488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.058494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.058508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.068462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.068510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.068523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.068533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.068539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.068553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.078362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.078406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.078419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.078426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.078432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.078446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.088497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.088541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.088554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.088561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.088567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.088581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.098525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.098571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.098584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.098591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.098597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.098611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.108571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.108616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.108629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.108636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.108643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.108660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.118558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.118608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.877 [2024-11-06 14:11:48.118621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.877 [2024-11-06 14:11:48.118628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.877 [2024-11-06 14:11:48.118635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.877 [2024-11-06 14:11:48.118649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.877 qpair failed and we were unable to recover it.
00:30:01.877 [2024-11-06 14:11:48.128665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.877 [2024-11-06 14:11:48.128710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.878 [2024-11-06 14:11:48.128723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.878 [2024-11-06 14:11:48.128730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.878 [2024-11-06 14:11:48.128736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.878 [2024-11-06 14:11:48.128760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.878 qpair failed and we were unable to recover it.
00:30:01.878 [2024-11-06 14:11:48.138634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.878 [2024-11-06 14:11:48.138680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.878 [2024-11-06 14:11:48.138693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.878 [2024-11-06 14:11:48.138700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.878 [2024-11-06 14:11:48.138706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.878 [2024-11-06 14:11:48.138720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.878 qpair failed and we were unable to recover it.
00:30:01.878 [2024-11-06 14:11:48.148674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.878 [2024-11-06 14:11:48.148723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.878 [2024-11-06 14:11:48.148736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.878 [2024-11-06 14:11:48.148743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.878 [2024-11-06 14:11:48.148754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:01.878 [2024-11-06 14:11:48.148768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.878 qpair failed and we were unable to recover it.
00:30:02.139 [2024-11-06 14:11:48.158705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.139 [2024-11-06 14:11:48.158753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.139 [2024-11-06 14:11:48.158767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.139 [2024-11-06 14:11:48.158774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.139 [2024-11-06 14:11:48.158780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.139 [2024-11-06 14:11:48.158795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.139 qpair failed and we were unable to recover it.
00:30:02.139 [2024-11-06 14:11:48.168667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.139 [2024-11-06 14:11:48.168707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.139 [2024-11-06 14:11:48.168720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.139 [2024-11-06 14:11:48.168727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.139 [2024-11-06 14:11:48.168734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.139 [2024-11-06 14:11:48.168750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.139 qpair failed and we were unable to recover it.
00:30:02.139 [2024-11-06 14:11:48.178741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.139 [2024-11-06 14:11:48.178795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.139 [2024-11-06 14:11:48.178808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.139 [2024-11-06 14:11:48.178815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.139 [2024-11-06 14:11:48.178821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.139 [2024-11-06 14:11:48.178835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.139 qpair failed and we were unable to recover it.
00:30:02.139 [2024-11-06 14:11:48.188772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.139 [2024-11-06 14:11:48.188817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.139 [2024-11-06 14:11:48.188830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.139 [2024-11-06 14:11:48.188837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.139 [2024-11-06 14:11:48.188843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.139 [2024-11-06 14:11:48.188857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.139 qpair failed and we were unable to recover it.
00:30:02.139 [2024-11-06 14:11:48.198792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.139 [2024-11-06 14:11:48.198877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.139 [2024-11-06 14:11:48.198893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.139 [2024-11-06 14:11:48.198901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.198907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.198921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.208678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.208723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.208737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.208748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.208755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.208770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.218837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.218884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.218897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.218904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.218910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.218924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.228882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.228930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.228944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.228951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.228957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.228971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.238895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.238937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.238950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.238957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.238963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.238981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.248774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.248814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.248827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.248833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.248840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.248854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.258852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.140 [2024-11-06 14:11:48.258915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.140 [2024-11-06 14:11:48.258928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.140 [2024-11-06 14:11:48.258935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.140 [2024-11-06 14:11:48.258941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.140 [2024-11-06 14:11:48.258954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.140 qpair failed and we were unable to recover it.
00:30:02.140 [2024-11-06 14:11:48.269030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.140 [2024-11-06 14:11:48.269075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.140 [2024-11-06 14:11:48.269088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.140 [2024-11-06 14:11:48.269095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.140 [2024-11-06 14:11:48.269101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.140 [2024-11-06 14:11:48.269115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.140 qpair failed and we were unable to recover it. 
00:30:02.140 [2024-11-06 14:11:48.278870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.140 [2024-11-06 14:11:48.278919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.140 [2024-11-06 14:11:48.278933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.140 [2024-11-06 14:11:48.278940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.140 [2024-11-06 14:11:48.278946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.140 [2024-11-06 14:11:48.278965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.140 qpair failed and we were unable to recover it. 
00:30:02.140 [2024-11-06 14:11:48.289000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.140 [2024-11-06 14:11:48.289044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.140 [2024-11-06 14:11:48.289057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.140 [2024-11-06 14:11:48.289064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.140 [2024-11-06 14:11:48.289070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.140 [2024-11-06 14:11:48.289084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.140 qpair failed and we were unable to recover it. 
00:30:02.140 [2024-11-06 14:11:48.299040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.140 [2024-11-06 14:11:48.299104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.140 [2024-11-06 14:11:48.299117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.140 [2024-11-06 14:11:48.299124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.140 [2024-11-06 14:11:48.299130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.140 [2024-11-06 14:11:48.299144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.140 qpair failed and we were unable to recover it. 
00:30:02.140 [2024-11-06 14:11:48.309076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.140 [2024-11-06 14:11:48.309128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.140 [2024-11-06 14:11:48.309141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.309148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.309154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.309168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.319105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.319151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.319164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.319171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.319177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.319191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.329139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.329183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.329203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.329210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.329217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.329230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.339148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.339202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.339215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.339222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.339228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.339242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.349190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.349240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.349253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.349260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.349266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.349280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.359215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.359265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.359278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.359284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.359291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.359304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.369233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.369280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.369292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.369300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.369310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.369324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.379263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.379307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.379320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.379327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.379334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.379347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.389300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.389346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.389360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.389367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.389374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.389388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.399319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.399364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.399377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.399384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.399390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.399404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.141 [2024-11-06 14:11:48.409400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.141 [2024-11-06 14:11:48.409486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.141 [2024-11-06 14:11:48.409499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.141 [2024-11-06 14:11:48.409506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.141 [2024-11-06 14:11:48.409513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.141 [2024-11-06 14:11:48.409526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.141 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.419379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.419431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.419455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.419464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.419471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.419490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.429405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.429459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.429484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.429493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.429500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.429520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.439292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.439342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.439357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.439364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.439371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.439386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.449326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.449371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.449386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.449393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.449399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.449415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.459467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.459512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.459530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.459537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.459543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.459558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.469510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.469582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.469595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.469603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.469609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.469623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.479545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.479589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.406 [2024-11-06 14:11:48.479602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.406 [2024-11-06 14:11:48.479609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.406 [2024-11-06 14:11:48.479616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.406 [2024-11-06 14:11:48.479629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-11-06 14:11:48.489419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.406 [2024-11-06 14:11:48.489463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.489476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.489483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.489489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.489503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.407 [2024-11-06 14:11:48.499456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.407 [2024-11-06 14:11:48.499509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.499524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.499534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.499540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.499561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.407 [2024-11-06 14:11:48.509627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.407 [2024-11-06 14:11:48.509676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.509689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.509696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.509703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.509717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.407 [2024-11-06 14:11:48.519633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.407 [2024-11-06 14:11:48.519690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.519704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.519710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.519718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.519731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.407 [2024-11-06 14:11:48.529659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.407 [2024-11-06 14:11:48.529704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.529718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.529725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.529731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.529748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.407 [2024-11-06 14:11:48.539688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.407 [2024-11-06 14:11:48.539730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.407 [2024-11-06 14:11:48.539743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.407 [2024-11-06 14:11:48.539754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.407 [2024-11-06 14:11:48.539761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.407 [2024-11-06 14:11:48.539775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.407 qpair failed and we were unable to recover it. 
00:30:02.672 [2024-11-06 14:11:48.890642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.672 [2024-11-06 14:11:48.890691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.672 [2024-11-06 14:11:48.890708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.672 [2024-11-06 14:11:48.890715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.672 [2024-11-06 14:11:48.890721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.672 [2024-11-06 14:11:48.890735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.672 qpair failed and we were unable to recover it. 
00:30:02.672 [2024-11-06 14:11:48.900656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.672 [2024-11-06 14:11:48.900701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.672 [2024-11-06 14:11:48.900714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.672 [2024-11-06 14:11:48.900721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.672 [2024-11-06 14:11:48.900727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.672 [2024-11-06 14:11:48.900741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.672 qpair failed and we were unable to recover it. 
00:30:02.672 [2024-11-06 14:11:48.910687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.672 [2024-11-06 14:11:48.910784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.672 [2024-11-06 14:11:48.910797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.672 [2024-11-06 14:11:48.910804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.672 [2024-11-06 14:11:48.910810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.672 [2024-11-06 14:11:48.910824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.672 qpair failed and we were unable to recover it. 
00:30:02.672 [2024-11-06 14:11:48.920683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.672 [2024-11-06 14:11:48.920728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.672 [2024-11-06 14:11:48.920741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.672 [2024-11-06 14:11:48.920752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.672 [2024-11-06 14:11:48.920759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.672 [2024-11-06 14:11:48.920773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.672 qpair failed and we were unable to recover it. 
00:30:02.672 [2024-11-06 14:11:48.930730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.672 [2024-11-06 14:11:48.930773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.672 [2024-11-06 14:11:48.930787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.672 [2024-11-06 14:11:48.930794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.672 [2024-11-06 14:11:48.930804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.672 [2024-11-06 14:11:48.930818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.673 qpair failed and we were unable to recover it. 
00:30:02.673 [2024-11-06 14:11:48.940770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.673 [2024-11-06 14:11:48.940820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.673 [2024-11-06 14:11:48.940833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.673 [2024-11-06 14:11:48.940840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.673 [2024-11-06 14:11:48.940847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.673 [2024-11-06 14:11:48.940861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.673 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:48.950782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:48.950831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:48.950844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:48.950851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:48.950858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:48.950872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:48.960832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:48.960918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:48.960931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:48.960938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:48.960945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:48.960959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:48.970843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:48.970938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:48.970951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:48.970958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:48.970965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:48.970978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:48.980846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:48.980922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:48.980935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:48.980942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:48.980949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:48.980963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:48.990779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:48.990830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:48.990844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:48.990851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:48.990857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:48.990871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.000925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.001002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.001016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.001023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.001029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.001043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.010980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.011024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.011037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.011044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.011050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.011064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.020985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.021029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.021050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.021057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.021063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.021077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.031025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.031078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.031091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.031098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.031104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.031118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.041043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.041087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.041100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.041107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.041113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.041127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.051033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.934 [2024-11-06 14:11:49.051074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.934 [2024-11-06 14:11:49.051087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.934 [2024-11-06 14:11:49.051094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.934 [2024-11-06 14:11:49.051100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.934 [2024-11-06 14:11:49.051114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.934 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-11-06 14:11:49.061053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.061097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.061110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.061120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.061127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.061140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.071123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.071172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.071187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.071194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.071200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.071218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.081131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.081175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.081189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.081196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.081202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.081216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.091161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.091206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.091219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.091227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.091233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.091247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.101198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.101271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.101283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.101290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.101296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.101310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.111200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.111249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.111262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.111269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.111275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.111290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.121104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.121150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.121163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.121170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.121176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.121190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.131129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.131173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.131186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.131193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.131199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.131213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.141180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.141238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.141251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.141258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.141264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.141278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.151205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.935 [2024-11-06 14:11:49.151255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.935 [2024-11-06 14:11:49.151269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.935 [2024-11-06 14:11:49.151276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.935 [2024-11-06 14:11:49.151283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:02.935 [2024-11-06 14:11:49.151297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-11-06 14:11:49.161370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.935 [2024-11-06 14:11:49.161416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.935 [2024-11-06 14:11:49.161429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.935 [2024-11-06 14:11:49.161436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.935 [2024-11-06 14:11:49.161443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.935 [2024-11-06 14:11:49.161457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.935 qpair failed and we were unable to recover it.
00:30:02.935 [2024-11-06 14:11:49.171362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.935 [2024-11-06 14:11:49.171404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.935 [2024-11-06 14:11:49.171417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.935 [2024-11-06 14:11:49.171424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.935 [2024-11-06 14:11:49.171430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.935 [2024-11-06 14:11:49.171444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.935 qpair failed and we were unable to recover it.
00:30:02.935 [2024-11-06 14:11:49.181389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.935 [2024-11-06 14:11:49.181432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.935 [2024-11-06 14:11:49.181445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.935 [2024-11-06 14:11:49.181452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.935 [2024-11-06 14:11:49.181458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.935 [2024-11-06 14:11:49.181472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.935 qpair failed and we were unable to recover it.
00:30:02.935 [2024-11-06 14:11:49.191437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.935 [2024-11-06 14:11:49.191482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.935 [2024-11-06 14:11:49.191495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.935 [2024-11-06 14:11:49.191505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.935 [2024-11-06 14:11:49.191512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.935 [2024-11-06 14:11:49.191526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.935 qpair failed and we were unable to recover it.
00:30:02.935 [2024-11-06 14:11:49.201465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.935 [2024-11-06 14:11:49.201560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.935 [2024-11-06 14:11:49.201573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.935 [2024-11-06 14:11:49.201580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.935 [2024-11-06 14:11:49.201587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:02.935 [2024-11-06 14:11:49.201600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.935 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.211484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.211571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.211584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.211591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.211598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.211612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.221497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.221540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.221553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.221561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.221567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.221581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.231561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.231617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.231630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.231637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.231644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.231661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.241585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.241627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.241640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.241646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.241653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.241667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.251610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.251665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.251678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.251685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.251691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.251705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.261628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.261688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.261701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.261708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.197 [2024-11-06 14:11:49.261714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.197 [2024-11-06 14:11:49.261728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.197 qpair failed and we were unable to recover it.
00:30:03.197 [2024-11-06 14:11:49.271660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.197 [2024-11-06 14:11:49.271711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.197 [2024-11-06 14:11:49.271724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.197 [2024-11-06 14:11:49.271731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.271737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.271755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.281658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.281701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.281714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.281720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.281727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.281741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.291765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.291818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.291831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.291838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.291844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.291858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.301741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.301793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.301806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.301813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.301820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.301834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.311765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.311813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.311826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.311833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.311839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.311853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.321651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.321693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.321709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.321716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.321723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.321737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.331808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.331895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.331908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.331915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.331922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.331936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.341711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.341762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.341775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.341782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.341789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.341803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.351752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.351805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.351819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.351826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.351833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.351851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.361895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.361939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.361953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.361960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.361971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.361986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.371913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.371955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.371967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.371975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.371981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.371995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.381939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.381983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.381996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.382003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.382009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.382023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.391991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.198 [2024-11-06 14:11:49.392039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.198 [2024-11-06 14:11:49.392051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.198 [2024-11-06 14:11:49.392058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.198 [2024-11-06 14:11:49.392064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.198 [2024-11-06 14:11:49.392078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.198 qpair failed and we were unable to recover it.
00:30:03.198 [2024-11-06 14:11:49.402008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.402057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.402070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.402077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.402083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.402096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.412036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.412076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.412089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.412096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.412102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.412116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.422060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.422104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.422117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.422125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.422131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.422144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.432098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.432145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.432157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.432164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.432171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.432184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.442102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.442156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.442170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.442179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.442189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.442205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.452143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.452186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.452202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.452209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.452215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.452229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.462065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.462119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.462131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.462138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.462145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.462159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.199 [2024-11-06 14:11:49.472063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.199 [2024-11-06 14:11:49.472112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.199 [2024-11-06 14:11:49.472125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.199 [2024-11-06 14:11:49.472132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.199 [2024-11-06 14:11:49.472138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.199 [2024-11-06 14:11:49.472152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.199 qpair failed and we were unable to recover it.
00:30:03.461 [2024-11-06 14:11:49.482220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.461 [2024-11-06 14:11:49.482262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.461 [2024-11-06 14:11:49.482275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.461 [2024-11-06 14:11:49.482282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.461 [2024-11-06 14:11:49.482289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.461 [2024-11-06 14:11:49.482302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.461 qpair failed and we were unable to recover it.
00:30:03.461 [2024-11-06 14:11:49.492117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.461 [2024-11-06 14:11:49.492178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.461 [2024-11-06 14:11:49.492191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.461 [2024-11-06 14:11:49.492199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.461 [2024-11-06 14:11:49.492209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.461 [2024-11-06 14:11:49.492223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.462 qpair failed and we were unable to recover it.
00:30:03.462 [2024-11-06 14:11:49.502285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.462 [2024-11-06 14:11:49.502333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.462 [2024-11-06 14:11:49.502346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.462 [2024-11-06 14:11:49.502353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.462 [2024-11-06 14:11:49.502359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.462 [2024-11-06 14:11:49.502373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.462 qpair failed and we were unable to recover it.
00:30:03.462 [2024-11-06 14:11:49.512300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.512350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.512363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.512370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.512377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.512391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.522214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.522261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.522273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.522280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.522286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.522300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.532345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.532389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.532402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.532409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.532415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.532429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.542384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.542435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.542448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.542455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.542461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.542475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.552410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.552462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.552475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.552482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.552488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.552502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.562443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.562492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.562505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.562512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.562518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.562532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.572461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.572503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.572516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.572523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.572530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.572544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.582489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.582569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.582584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.582591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.582598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.582611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.592527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.592575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.592588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.592595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.592601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.592615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.602525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.602569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.602582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.602589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.602595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.602608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.612570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.612612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.612625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.612632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.612638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.612652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.622593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.622640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.622654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.622664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.622670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.622684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.632677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.632725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.632738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.632749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.632756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.632770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.642646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.642697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.642709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.642716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.642723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.642736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.652676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.652725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.652737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.652747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.652754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.652768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.662579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.662624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.662638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.662645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.662651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.662674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.672750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.672808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.462 [2024-11-06 14:11:49.672822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.462 [2024-11-06 14:11:49.672828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.462 [2024-11-06 14:11:49.672835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.462 [2024-11-06 14:11:49.672849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.462 qpair failed and we were unable to recover it. 
00:30:03.462 [2024-11-06 14:11:49.682776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.462 [2024-11-06 14:11:49.682822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.682835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.682842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.682848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.682862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.463 [2024-11-06 14:11:49.692760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.463 [2024-11-06 14:11:49.692806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.692819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.692826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.692833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.692846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.463 [2024-11-06 14:11:49.702867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.463 [2024-11-06 14:11:49.702913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.702926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.702933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.702939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.702953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.463 [2024-11-06 14:11:49.712886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.463 [2024-11-06 14:11:49.712975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.712989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.712996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.713002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.713016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.463 [2024-11-06 14:11:49.722849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.463 [2024-11-06 14:11:49.722890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.722903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.722910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.722916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.722930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.463 [2024-11-06 14:11:49.732885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.463 [2024-11-06 14:11:49.732924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.463 [2024-11-06 14:11:49.732938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.463 [2024-11-06 14:11:49.732945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.463 [2024-11-06 14:11:49.732952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.463 [2024-11-06 14:11:49.732967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.463 qpair failed and we were unable to recover it. 
00:30:03.725 [2024-11-06 14:11:49.742985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.725 [2024-11-06 14:11:49.743044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.725 [2024-11-06 14:11:49.743057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.725 [2024-11-06 14:11:49.743064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.725 [2024-11-06 14:11:49.743071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.725 [2024-11-06 14:11:49.743085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.725 qpair failed and we were unable to recover it. 
00:30:03.725 [2024-11-06 14:11:49.752972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.725 [2024-11-06 14:11:49.753027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.725 [2024-11-06 14:11:49.753039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.725 [2024-11-06 14:11:49.753053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.725 [2024-11-06 14:11:49.753059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.725 [2024-11-06 14:11:49.753073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.725 qpair failed and we were unable to recover it. 
00:30:03.725 [2024-11-06 14:11:49.762981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.725 [2024-11-06 14:11:49.763026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.725 [2024-11-06 14:11:49.763039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.725 [2024-11-06 14:11:49.763045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.725 [2024-11-06 14:11:49.763052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.725 [2024-11-06 14:11:49.763066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.725 qpair failed and we were unable to recover it. 
00:30:03.725 [2024-11-06 14:11:49.772978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.725 [2024-11-06 14:11:49.773022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.725 [2024-11-06 14:11:49.773035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.725 [2024-11-06 14:11:49.773042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.725 [2024-11-06 14:11:49.773048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90 00:30:03.725 [2024-11-06 14:11:49.773062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.725 qpair failed and we were unable to recover it. 
00:30:03.725 [2024-11-06 14:11:49.783044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.725 [2024-11-06 14:11:49.783100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.725 [2024-11-06 14:11:49.783113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.725 [2024-11-06 14:11:49.783120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.725 [2024-11-06 14:11:49.783126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.725 [2024-11-06 14:11:49.783140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.725 qpair failed and we were unable to recover it.
00:30:03.725 [2024-11-06 14:11:49.793043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.725 [2024-11-06 14:11:49.793088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.725 [2024-11-06 14:11:49.793100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.725 [2024-11-06 14:11:49.793107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.725 [2024-11-06 14:11:49.793114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.725 [2024-11-06 14:11:49.793131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.725 qpair failed and we were unable to recover it.
00:30:03.725 [2024-11-06 14:11:49.803100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.725 [2024-11-06 14:11:49.803149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.725 [2024-11-06 14:11:49.803162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.725 [2024-11-06 14:11:49.803169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.725 [2024-11-06 14:11:49.803176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.725 [2024-11-06 14:11:49.803189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.725 qpair failed and we were unable to recover it.
00:30:03.725 [2024-11-06 14:11:49.813116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.725 [2024-11-06 14:11:49.813160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.813173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.813180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.813186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.813200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.823154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.823202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.823215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.823222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.823228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.823242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.833207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.833253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.833266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.833274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.833280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.833293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.843207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.843277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.843290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.843297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.843303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.843317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.853232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.853278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.853291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.853297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.853304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.853317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.863258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.863301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.863315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.863322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.863329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.863342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.873282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.873336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.873349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.873357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.873363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.873377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.883306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.883355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.883371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.883378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.883384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.883398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.893201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.893246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.893259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.893266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.893272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.893286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.903373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.903420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.903433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.903440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.903446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.903460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.913407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.913454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.913467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.913474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.913480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.913494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.923438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.923482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.923495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.923502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.923512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.923526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.933307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.933347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.933361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.726 [2024-11-06 14:11:49.933368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.726 [2024-11-06 14:11:49.933374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.726 [2024-11-06 14:11:49.933387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.726 qpair failed and we were unable to recover it.
00:30:03.726 [2024-11-06 14:11:49.943466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.726 [2024-11-06 14:11:49.943512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.726 [2024-11-06 14:11:49.943524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.943531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.943537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.943551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.727 [2024-11-06 14:11:49.953377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.727 [2024-11-06 14:11:49.953424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.727 [2024-11-06 14:11:49.953437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.953444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.953450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.953464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.727 [2024-11-06 14:11:49.963534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.727 [2024-11-06 14:11:49.963596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.727 [2024-11-06 14:11:49.963609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.963616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.963622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.963636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.727 [2024-11-06 14:11:49.973547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.727 [2024-11-06 14:11:49.973600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.727 [2024-11-06 14:11:49.973625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.973638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.973645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.973665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.727 [2024-11-06 14:11:49.983567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.727 [2024-11-06 14:11:49.983611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.727 [2024-11-06 14:11:49.983627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.983634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.983640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.983656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.727 [2024-11-06 14:11:49.993482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.727 [2024-11-06 14:11:49.993530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.727 [2024-11-06 14:11:49.993543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.727 [2024-11-06 14:11:49.993550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.727 [2024-11-06 14:11:49.993557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.727 [2024-11-06 14:11:49.993571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.727 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.003553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.003602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.003618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.003625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.003631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.003647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.013698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.013759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.013781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.013789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.013795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.013811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.023723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.023830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.023844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.023851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.023858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.023872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.033739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.033795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.033811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.033818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.033825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.033840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.043794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.043836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.043850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.043858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.043864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.043878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.053777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.053824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.053837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.053844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.053854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.053869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.063817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.063861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.063874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.063882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.063888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf2c000b90
00:30:03.989 [2024-11-06 14:11:50.063903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.073838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.073984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.074048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.074074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.074094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf20000b90
00:30:03.989 [2024-11-06 14:11:50.074148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.083852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.083912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.083941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.989 [2024-11-06 14:11:50.083957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.989 [2024-11-06 14:11:50.083971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf20000b90
00:30:03.989 [2024-11-06 14:11:50.084003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.989 qpair failed and we were unable to recover it.
00:30:03.989 [2024-11-06 14:11:50.093757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.989 [2024-11-06 14:11:50.093819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.989 [2024-11-06 14:11:50.093837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.990 [2024-11-06 14:11:50.093847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.990 [2024-11-06 14:11:50.093856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf20000b90
00:30:03.990 [2024-11-06 14:11:50.093877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.990 qpair failed and we were unable to recover it.
00:30:03.990 [2024-11-06 14:11:50.103891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.990 [2024-11-06 14:11:50.104009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.990 [2024-11-06 14:11:50.104075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.990 [2024-11-06 14:11:50.104099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.990 [2024-11-06 14:11:50.104119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ccb010
00:30:03.990 [2024-11-06 14:11:50.104172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.990 qpair failed and we were unable to recover it.
00:30:03.990 [2024-11-06 14:11:50.113955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.990 [2024-11-06 14:11:50.114045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.990 [2024-11-06 14:11:50.114088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.990 [2024-11-06 14:11:50.114109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.990 [2024-11-06 14:11:50.114128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ccb010
00:30:03.990 [2024-11-06 14:11:50.114172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.990 qpair failed and we were unable to recover it.
00:30:03.990 [2024-11-06 14:11:50.123964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.990 [2024-11-06 14:11:50.124055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.990 [2024-11-06 14:11:50.124119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.990 [2024-11-06 14:11:50.124144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.990 [2024-11-06 14:11:50.124166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf24000b90
00:30:03.990 [2024-11-06 14:11:50.124221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.990 qpair failed and we were unable to recover it.
00:30:03.990 [2024-11-06 14:11:50.133994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.990 [2024-11-06 14:11:50.134088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.990 [2024-11-06 14:11:50.134122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.990 [2024-11-06 14:11:50.134139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.990 [2024-11-06 14:11:50.134155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf24000b90 00:30:03.990 [2024-11-06 14:11:50.134189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-11-06 14:11:50.134358] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:03.990 A controller has encountered a failure and is being reset. 00:30:04.251 Controller properly reset. 00:30:04.251 Initializing NVMe Controllers 00:30:04.251 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:04.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:04.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:04.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:04.251 Initialization complete. Launching workers. 
00:30:04.251 Starting thread on core 1 00:30:04.251 Starting thread on core 2 00:30:04.251 Starting thread on core 3 00:30:04.251 Starting thread on core 0 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:04.251 00:30:04.251 real 0m11.516s 00:30:04.251 user 0m22.189s 00:30:04.251 sys 0m3.910s 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.251 ************************************ 00:30:04.251 END TEST nvmf_target_disconnect_tc2 00:30:04.251 ************************************ 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.251 rmmod nvme_tcp 00:30:04.251 rmmod nvme_fabrics 00:30:04.251 rmmod nvme_keyring 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2601631 ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2601631 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2601631 ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2601631 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2601631 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2601631' 00:30:04.251 killing process with pid 2601631 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 2601631 00:30:04.251 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2601631 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.512 14:11:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.427 14:11:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.427 00:30:06.427 real 0m21.966s 00:30:06.427 user 0m50.290s 00:30:06.427 sys 0m10.155s 00:30:06.427 14:11:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:06.427 14:11:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:06.427 ************************************ 00:30:06.427 END TEST nvmf_target_disconnect 00:30:06.427 ************************************ 00:30:06.687 14:11:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:06.688 00:30:06.688 real 6m36.791s 00:30:06.688 user 11m26.612s 00:30:06.688 sys 2m17.478s 00:30:06.688 14:11:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:06.688 14:11:52 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.688 ************************************ 00:30:06.688 END TEST nvmf_host 00:30:06.688 ************************************ 00:30:06.688 14:11:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:06.688 14:11:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:06.688 14:11:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:06.688 14:11:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:06.688 14:11:52 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:06.688 14:11:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.688 ************************************ 00:30:06.688 START TEST nvmf_target_core_interrupt_mode 00:30:06.688 ************************************ 00:30:06.688 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:06.688 * Looking for test storage... 
00:30:06.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:06.688 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:06.688 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:06.688 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:06.948 14:11:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.948 14:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.948 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.949 --rc 
genhtml_branch_coverage=1 00:30:06.949 --rc genhtml_function_coverage=1 00:30:06.949 --rc genhtml_legend=1 00:30:06.949 --rc geninfo_all_blocks=1 00:30:06.949 --rc geninfo_unexecuted_blocks=1 00:30:06.949 00:30:06.949 ' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.949 --rc genhtml_branch_coverage=1 00:30:06.949 --rc genhtml_function_coverage=1 00:30:06.949 --rc genhtml_legend=1 00:30:06.949 --rc geninfo_all_blocks=1 00:30:06.949 --rc geninfo_unexecuted_blocks=1 00:30:06.949 00:30:06.949 ' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.949 --rc genhtml_branch_coverage=1 00:30:06.949 --rc genhtml_function_coverage=1 00:30:06.949 --rc genhtml_legend=1 00:30:06.949 --rc geninfo_all_blocks=1 00:30:06.949 --rc geninfo_unexecuted_blocks=1 00:30:06.949 00:30:06.949 ' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.949 --rc genhtml_branch_coverage=1 00:30:06.949 --rc genhtml_function_coverage=1 00:30:06.949 --rc genhtml_legend=1 00:30:06.949 --rc geninfo_all_blocks=1 00:30:06.949 --rc geninfo_unexecuted_blocks=1 00:30:06.949 00:30:06.949 ' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.949 
14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.949 14:11:53 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.949 
14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.949 ************************************ 00:30:06.949 START TEST nvmf_abort 00:30:06.949 ************************************ 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.949 * Looking for test storage... 
00:30:06.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:06.949 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.209 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:07.210 14:11:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:07.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.210 --rc genhtml_branch_coverage=1 00:30:07.210 --rc genhtml_function_coverage=1 00:30:07.210 --rc genhtml_legend=1 00:30:07.210 --rc geninfo_all_blocks=1 00:30:07.210 --rc geninfo_unexecuted_blocks=1 00:30:07.210 00:30:07.210 ' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:07.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.210 --rc genhtml_branch_coverage=1 00:30:07.210 --rc genhtml_function_coverage=1 00:30:07.210 --rc genhtml_legend=1 00:30:07.210 --rc geninfo_all_blocks=1 00:30:07.210 --rc geninfo_unexecuted_blocks=1 00:30:07.210 00:30:07.210 ' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:07.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.210 --rc genhtml_branch_coverage=1 00:30:07.210 --rc genhtml_function_coverage=1 00:30:07.210 --rc genhtml_legend=1 00:30:07.210 --rc geninfo_all_blocks=1 00:30:07.210 --rc geninfo_unexecuted_blocks=1 00:30:07.210 00:30:07.210 ' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:07.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.210 --rc genhtml_branch_coverage=1 00:30:07.210 --rc genhtml_function_coverage=1 00:30:07.210 --rc genhtml_legend=1 00:30:07.210 --rc geninfo_all_blocks=1 00:30:07.210 --rc geninfo_unexecuted_blocks=1 00:30:07.210 00:30:07.210 ' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.210 14:11:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.210 14:11:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.210 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.211 14:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.344 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.345 14:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:15.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:15.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.345 
14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:15.345 Found net devices under 0000:31:00.0: cvl_0_0 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:15.345 Found net devices under 0000:31:00.1: cvl_0_1 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.345 14:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.345 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:30:15.346 00:30:15.346 --- 10.0.0.2 ping statistics --- 00:30:15.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.346 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:15.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:30:15.346 00:30:15.346 --- 10.0.0.1 ping statistics --- 00:30:15.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.346 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2607271 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2607271 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2607271 ']' 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.346 14:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.346 [2024-11-06 14:12:01.030011] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.346 [2024-11-06 14:12:01.031169] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:30:15.346 [2024-11-06 14:12:01.031220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.346 [2024-11-06 14:12:01.133481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:15.346 [2024-11-06 14:12:01.184777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.346 [2024-11-06 14:12:01.184826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.346 [2024-11-06 14:12:01.184834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.346 [2024-11-06 14:12:01.184841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.346 [2024-11-06 14:12:01.184848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.346 [2024-11-06 14:12:01.186673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.346 [2024-11-06 14:12:01.186807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.346 [2024-11-06 14:12:01.186808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.346 [2024-11-06 14:12:01.265530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:15.346 [2024-11-06 14:12:01.266650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:15.346 [2024-11-06 14:12:01.267043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:15.346 [2024-11-06 14:12:01.267216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:15.607 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:15.607 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:15.607 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.607 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.607 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 [2024-11-06 14:12:01.895968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:15.869 Malloc0 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 Delay0 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 [2024-11-06 14:12:01.999947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.869 14:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:15.869 [2024-11-06 14:12:02.103431] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:18.416 Initializing NVMe Controllers 00:30:18.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:18.416 controller IO queue size 128 less than required 00:30:18.416 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:18.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:18.416 Initialization complete. Launching workers. 
00:30:18.416 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28133 00:30:18.416 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28190, failed to submit 66 00:30:18.416 success 28133, unsuccessful 57, failed 0 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.416 rmmod nvme_tcp 00:30:18.416 rmmod nvme_fabrics 00:30:18.416 rmmod nvme_keyring 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.416 14:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2607271 ']' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2607271 ']' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2607271' 00:30:18.416 killing process with pid 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2607271 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.416 14:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.416 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.341 00:30:20.341 real 0m13.476s 00:30:20.341 user 0m10.790s 00:30:20.341 sys 0m6.967s 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.341 ************************************ 00:30:20.341 END TEST nvmf_abort 00:30:20.341 ************************************ 00:30:20.341 14:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:20.341 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:20.603 ************************************ 00:30:20.603 START TEST nvmf_ns_hotplug_stress 00:30:20.603 ************************************ 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:20.603 * Looking for test storage... 
00:30:20.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.603 14:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.603 14:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.603 --rc genhtml_branch_coverage=1 00:30:20.603 --rc genhtml_function_coverage=1 00:30:20.603 --rc genhtml_legend=1 00:30:20.603 --rc geninfo_all_blocks=1 00:30:20.603 --rc geninfo_unexecuted_blocks=1 00:30:20.603 00:30:20.603 ' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.603 --rc genhtml_branch_coverage=1 00:30:20.603 --rc genhtml_function_coverage=1 00:30:20.603 --rc genhtml_legend=1 00:30:20.603 --rc geninfo_all_blocks=1 00:30:20.603 --rc geninfo_unexecuted_blocks=1 00:30:20.603 00:30:20.603 ' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.603 --rc genhtml_branch_coverage=1 00:30:20.603 --rc genhtml_function_coverage=1 00:30:20.603 --rc genhtml_legend=1 00:30:20.603 --rc geninfo_all_blocks=1 00:30:20.603 --rc geninfo_unexecuted_blocks=1 00:30:20.603 00:30:20.603 ' 00:30:20.603 14:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.603 --rc genhtml_branch_coverage=1 00:30:20.603 --rc genhtml_function_coverage=1 00:30:20.603 --rc genhtml_legend=1 00:30:20.603 --rc geninfo_all_blocks=1 00:30:20.603 --rc geninfo_unexecuted_blocks=1 00:30:20.603 00:30:20.603 ' 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.603 14:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:20.603 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.865 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.865 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.865 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.865 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.865 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.866 
14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.866 14:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.004 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.004 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.004 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.004 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.004 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.005 
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.005 14:12:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:29.005 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.005 14:12:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:29.005 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.005 
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:29.005 Found net devices under 0000:31:00.0: cvl_0_0 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.005 14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:29.005 Found net devices under 0000:31:00.1: cvl_0_1 00:30:29.005 
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.005
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.006
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.006
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:30:29.006
00:30:29.006
--- 10.0.0.2 ping statistics --- 00:30:29.006
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.006
rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.006
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.006
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:30:29.006
00:30:29.006
--- 10.0.0.1 ping statistics --- 00:30:29.006
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.006
rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2612506 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2612506 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2612506 ']' 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.006
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:29.006
14:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.006
[2024-11-06 14:12:14.618231] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:29.006
[2024-11-06 14:12:14.619405] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:30:29.006
[2024-11-06 14:12:14.619457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.006
[2024-11-06 14:12:14.721404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:29.006
[2024-11-06 14:12:14.773615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.006
[2024-11-06 14:12:14.773666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.006
[2024-11-06 14:12:14.773675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.006
[2024-11-06 14:12:14.773683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.006
[2024-11-06 14:12:14.773689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.006
[2024-11-06 14:12:14.775578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.006
[2024-11-06 14:12:14.775741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.006
[2024-11-06 14:12:14.775741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.006
[2024-11-06 14:12:14.854466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:29.006
[2024-11-06 14:12:14.855555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:29.006
[2024-11-06 14:12:14.856033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:29.006
[2024-11-06 14:12:14.856199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:29.266
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:29.266
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:29.267
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.527
[2024-11-06 14:12:15.636717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.527
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:29.788
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.788
[2024-11-06 14:12:16.009555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.788
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:30.049
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:30.310
Malloc0 00:30:30.310
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:30.310
Delay0 00:30:30.571
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.572
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:30.833
NULL1 00:30:30.833
14:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:31.094
14:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2613120 00:30:31.094
14:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:31.094
14:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.094
14:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:32.037
Read completed with error (sct=0, sc=11) 00:30:32.299
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.299
14:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.299
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.299
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.299
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.299
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.299
14:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:32.299
14:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:32.560
true 00:30:32.560
14:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:32.560
14:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.504
14:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.504
14:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:33.504
14:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:33.764
true 00:30:33.764
14:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:33.764
14:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.025
14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.025
14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:34.025
14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:34.285
true 00:30:34.285
14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:34.285
14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
14:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.669
14:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:35.669
14:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:35.929
true 00:30:35.929
14:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:35.929
14:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.871
14:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.871
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:36.871
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:37.132
true 00:30:37.132
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:37.132
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.132
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.393
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:37.393
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:37.653
true 00:30:37.653
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:37.653
14:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.594
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.594
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
14:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.855
14:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:38.855
14:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:39.116
true 00:30:39.116
14:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:39.116
14:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.059
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.059
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.059
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.059
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:40.059
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:40.319
true 00:30:40.319
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:40.319
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.579
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.579
14:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:40.840
true 00:30:40.840
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:40.840
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.104
[2024-11-06 14:12:27.362903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.104
[previous *ERROR* line repeated once per queued read, timestamps 2024-11-06 14:12:27.362954 through 14:12:27.365616]
[2024-11-06 14:12:27.365644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.365987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 
[2024-11-06 14:12:27.366066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366492] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.366730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367810] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.367988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.105 [2024-11-06 14:12:27.368494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 
14:12:27.368661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.368978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 
[2024-11-06 14:12:27.369596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.369625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370454] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.370972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371318] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.106 [2024-11-06 14:12:27.371941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.371968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 14:12:27.372260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 [2024-11-06 
14:12:27.372293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.107 
00:30:41.400 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.400 
[2024-11-06 14:12:27.382675] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.382942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.383531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.383565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.383593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.383625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.400 [2024-11-06 14:12:27.383655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.383977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384167] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.384988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 
14:12:27.385013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 
[2024-11-06 14:12:27.385950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.385978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386592] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.401 [2024-11-06 14:12:27.386950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.386979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387488] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.387991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 
14:12:27.388517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.388993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 
[2024-11-06 14:12:27.389373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389869] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.402 [2024-11-06 14:12:27.389895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.404 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:30:41.404 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:30:41.406 [2024-11-06 14:12:27.400292] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.400984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401140] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.401632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 
14:12:27.402578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.402990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 
[2024-11-06 14:12:27.403453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.406 [2024-11-06 14:12:27.403775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403865] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.403980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.404993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405051] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 
14:12:27.405942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.405977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 
[2024-11-06 14:12:27.406814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.406875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.407 [2024-11-06 14:12:27.407258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407407] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.408 [2024-11-06 14:12:27.407446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries from 14:12:27.407474 through 14:12:27.416198 omitted ...]
00:30:41.410 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical entries from 14:12:27.416228 through 14:12:27.417332 omitted ...]
00:30:41.411 [2024-11-06 14:12:27.417362] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.417882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418318] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.418988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 
14:12:27.419674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.419987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 
[2024-11-06 14:12:27.420499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.411 [2024-11-06 14:12:27.420559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.420838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421094] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.421985] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 
14:12:27.422836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.422896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.423974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.424004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 [2024-11-06 14:12:27.424033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.412 
[2024-11-06 14:12:27.424062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424480] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.413 [2024-11-06 14:12:27.424519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* message repeated for timestamps 14:12:27.424549 through 14:12:27.435076; identical repeats omitted ...]
[2024-11-06 14:12:27.435111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435567] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.435974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436417] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.436657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 
14:12:27.437616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.416 [2024-11-06 14:12:27.437913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.437938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.437966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.437993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 
[2024-11-06 14:12:27.438547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.438940] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.439981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440141] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.417 [2024-11-06 14:12:27.440836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.440865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.440895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.440923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.440951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 
14:12:27.440979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.418 [2024-11-06 14:12:27.441766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.420 Message suppressed 999 times: [2024-11-06 14:12:27.451668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.420 Read completed with error (sct=0, sc=15) 00:30:41.421
[2024-11-06 14:12:27.452635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.452979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453107] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.453995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454178] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.454981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 
14:12:27.455116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.421 [2024-11-06 14:12:27.455207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.455972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 
[2024-11-06 14:12:27.456357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456791] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.456986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457634] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.457829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.422 [2024-11-06 14:12:27.458572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 
14:12:27.458869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.458986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.423 [2024-11-06 14:12:27.459327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 
[... identical nvmf_bdev_ctrlr_read_cmd error repeated; entries from 14:12:27.459352 through 14:12:27.470242 differ only in timestamp and are elided ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 
[2024-11-06 14:12:27.470688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.470963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471103] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.426 [2024-11-06 14:12:27.471982] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.472986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 
14:12:27.473211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.473993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 
[2024-11-06 14:12:27.474072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474817] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.474993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.427 [2024-11-06 14:12:27.475642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475702] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.475989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.476475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.477137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 14:12:27.477173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.428 [2024-11-06 
14:12:27.477203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.431 [2024-11-06 14:12:27.487679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.487999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 
[2024-11-06 14:12:27.488120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488578] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.488988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.431 [2024-11-06 14:12:27.489688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489902] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.489985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 
14:12:27.490810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.490978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 
[2024-11-06 14:12:27.491871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.491984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492290] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.432 [2024-11-06 14:12:27.492907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.492933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.492961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.492985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493137] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.493984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 14:12:27.494327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.433 [2024-11-06 
14:12:27.494355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* entries from 14:12:27.494385 through 14:12:27.504847 omitted ...]
00:30:41.436 [2024-11-06
14:12:27.504881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.504921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.505979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 
[2024-11-06 14:12:27.506136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.436 [2024-11-06 14:12:27.506501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506556] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.506996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507598] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.507850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 
14:12:27.508921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.508976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 
[2024-11-06 14:12:27.509775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.509982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.510015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.510046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.437 [2024-11-06 14:12:27.510082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510204] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.510984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511214] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.438 [2024-11-06 14:12:27.511594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1
00:30:41.438 [2024-11-06 14:12:27.511624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.441 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:41.441 [2024-11-06
14:12:27.522734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.522975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 
[2024-11-06 14:12:27.523613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.523990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524050] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.441 [2024-11-06 14:12:27.524199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.524979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525180] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.525989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 
14:12:27.526018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.526980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 
[2024-11-06 14:12:27.527206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527628] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.442 [2024-11-06 14:12:27.527921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.527957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.527985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528488] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.528824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.443 [2024-11-06 14:12:27.529524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.443 [... identical *ERROR* message repeated continuously from 14:12:27.529553 through 14:12:27.540226 ...] 00:30:41.445 [2024-11-06 14:12:27.540255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 
14:12:27.540885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.540977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.541986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 
[2024-11-06 14:12:27.542188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542603] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.542987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543501] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.543978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 
14:12:27.544496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.544993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 
[2024-11-06 14:12:27.545765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.445 [2024-11-06 14:12:27.545934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.545965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.545992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546183] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.546973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.547010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.547048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.547084] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.446 [2024-11-06 14:12:27.547119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.446 [... previous message repeated verbatim many times between 14:12:27.547 and 14:12:27.558; duplicates elided ...]
00:30:41.447 true
14:12:27.557876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.557905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.557931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.557960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.557997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.447 [2024-11-06 14:12:27.558601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 
[2024-11-06 14:12:27.558712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.558966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 Message suppressed 999 times: Read completed with error (sct=0, 
sc=15) 00:30:41.448 [2024-11-06 14:12:27.559735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.559981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560147] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.560973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561028] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.561976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 
14:12:27.562033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 
[2024-11-06 14:12:27.562958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.562987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563364] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.563722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564616] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.448 [2024-11-06 14:12:27.564651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated verbatim through 2024-11-06 14:12:27.575237]
> SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575809] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.575983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 
14:12:27.576715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.576993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 
[2024-11-06 14:12:27.577583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.577972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578118] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.450 [2024-11-06 14:12:27.578850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.578883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.578913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.578940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.578970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.578999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579206] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.579967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 
14:12:27.580441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.580944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:41.451 [2024-11-06 14:12:27.580979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.451 [2024-11-06 14:12:27.581363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.581976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.582008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.582035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.582068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.582096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 14:12:27.582158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.451 [2024-11-06 
14:12:27.582185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06
14:12:27.593102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.593695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 
[2024-11-06 14:12:27.594294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594710] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.594996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595687] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.595969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.453 [2024-11-06 14:12:27.596328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.596597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 
[2024-11-06 14:12:27.597597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.597986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598015] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.453 [2024-11-06 14:12:27.598256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598945] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.598972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 14:12:27.599857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454 [2024-11-06 
14:12:27.599886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.454
[... the same *ERROR* line repeated verbatim at timestamps 14:12:27.599915 through 14:12:27.610916; final entry truncated ...]
14:12:27.610955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.610985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.455 [2024-11-06 14:12:27.611279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 
[2024-11-06 14:12:27.611883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.611995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612322] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.612989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613413] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.613980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 
14:12:27.614345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.614865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 
[2024-11-06 14:12:27.615564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.615995] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616887] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.616978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.456 [2024-11-06 14:12:27.617453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.627897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.627923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.627952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.627979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 
14:12:27.628440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.628728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 
[2024-11-06 14:12:27.629477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629881] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.629980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.630004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.630028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.630051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.630075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.458 [2024-11-06 14:12:27.630103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.459 [2024-11-06 14:12:27.630958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.630985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 
14:12:27.631440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.631984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 
[2024-11-06 14:12:27.632319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632870] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.632993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.633979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634226] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.459 [2024-11-06 14:12:27.634666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.645981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 
14:12:27.646057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.646938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 
[2024-11-06 14:12:27.646970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647499] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.461 [2024-11-06 14:12:27.647996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648398] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.648987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.649017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.649397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.649433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.461 [2024-11-06 14:12:27.649464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 
14:12:27.649654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.649991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 
[2024-11-06 14:12:27.650527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650884] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.650994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.651833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.652056] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.751 [2024-11-06 14:12:27.652091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated verbatim for timestamps 2024-11-06 14:12:27.652120 through 14:12:27.662134 (elapsed 00:30:41.751-00:30:41.753) ...]
00:30:41.753 [2024-11-06 14:12:27.662162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662570] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.662947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 
14:12:27.663751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.663990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 
[2024-11-06 14:12:27.664620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.753 [2024-11-06 14:12:27.664806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.664992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665049] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.665976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.754 [2024-11-06 14:12:27.666081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 
14:12:27.666752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.666986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.667565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 
[2024-11-06 14:12:27.668132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668617] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.668991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669501] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.754 [2024-11-06 14:12:27.669533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated through 00:30:41.756 / 2024-11-06 14:12:27.680175; duplicates elided]
> SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680625] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.680976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 
14:12:27.681491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.681605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.682975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 
[2024-11-06 14:12:27.683085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683547] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.683984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684546] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.684994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.756 [2024-11-06 14:12:27.685180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 
14:12:27.685504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.685983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 
[2024-11-06 14:12:27.686425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.686978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687168] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.757 [2024-11-06 14:12:27.687612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.759 [identical *ERROR* line repeated many times, timestamps 14:12:27.687643 through 14:12:27.698580; repeats omitted]
> SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.698977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699070] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 
14:12:27.699954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.699989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.700964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 
[2024-11-06 14:12:27.701271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701729] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.701977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.702531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.759 [2024-11-06 14:12:27.703107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 
14:12:27.703557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.703981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 
[2024-11-06 14:12:27.704476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704902] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.704977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.759 [2024-11-06 14:12:27.705473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.760 [2024-11-06 14:12:27.705506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "Read NLB 1 * block size 512 > SGL length 1" error repeated through 2024-11-06 14:12:27.716395 (log timestamps 00:30:41.760-00:30:41.761)]
> SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.716975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717221] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.717998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 
14:12:27.718128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.761 [2024-11-06 14:12:27.718767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.718798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.718829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.718858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.718888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 
[2024-11-06 14:12:27.719362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719834] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.719996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720698] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.720996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 
14:12:27.721927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.721988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 
[2024-11-06 14:12:27.722834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.722972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.762 [2024-11-06 14:12:27.723259] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.734978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735628] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.735982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 
14:12:27.736516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.736988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.737357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 
[2024-11-06 14:12:27.737974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738415] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.738991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.764 [2024-11-06 14:12:27.739166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739229] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.739989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.765 [2024-11-06 14:12:27.740184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 
[2024-11-06 14:12:27.740621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.740974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.741003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.741030] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 [2024-11-06 14:12:27.741061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.765 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.767 [2024-11-06 14:12:27.941654] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.941972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942749] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.942984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 
14:12:27.943620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.943987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 
[2024-11-06 14:12:27.944844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.944980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945261] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.945991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946121] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.767 [2024-11-06 14:12:27.946567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.946590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.946966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 
14:12:27.947386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.947996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 
[2024-11-06 14:12:27.948276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.768 [2024-11-06 14:12:27.948694] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.768 [2024-11-06 14:12:27.948723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:41.769 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:30:41.769 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:30:41.770 [2024-11-06 14:12:27.958699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.958970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959155] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.959978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 
14:12:27.960029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.960994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 
[2024-11-06 14:12:27.961323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961769] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.961994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962579] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.962773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.770 [2024-11-06 14:12:27.963729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.963993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 
14:12:27.964210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.964993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 
[2024-11-06 14:12:27.965046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965576] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:41.771 [2024-11-06 14:12:27.965720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 14:12:27.965950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.771 [2024-11-06 
14:12:27.965976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
14:12:27.976750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.976972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 
[2024-11-06 14:12:27.977664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.977990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978116] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.978991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979130] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.979990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 
14:12:27.980270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.980884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 
[2024-11-06 14:12:27.981447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.773 [2024-11-06 14:12:27.981480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981881] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.981973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982766] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.982988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.774 [2024-11-06 14:12:27.983290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated for timestamps 14:12:27.983324 through 14:12:27.994017; repeats elided ...]
00:30:41.776 [2024-11-06 14:12:27.994045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 
14:12:27.994591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.776 [2024-11-06 14:12:27.994686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.994980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 
[2024-11-06 14:12:27.995908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.995999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996318] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.996973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997265] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.997976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.777 [2024-11-06 14:12:27.998155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 
14:12:27.998307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.998980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 
[2024-11-06 14:12:27.999195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999944] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:27.999972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:41.778 [2024-11-06 14:12:28.000651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 [2024-11-06 14:12:28.000680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 [2024-11-06 14:12:28.000709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 [2024-11-06 14:12:28.000742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 [2024-11-06 14:12:28.000778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 [2024-11-06 14:12:28.000821] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.063 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.011981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 
14:12:28.012191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.012976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 
[2024-11-06 14:12:28.013376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013811] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.013965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.066 [2024-11-06 14:12:28.014240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014777] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.014977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 
14:12:28.015671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.015980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 
[2024-11-06 14:12:28.016943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.016973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.067 [2024-11-06 14:12:28.017317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017344] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.017976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.068 [2024-11-06 14:12:28.018226] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.028990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029147] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.029974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 
14:12:28.030148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.030995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 
[2024-11-06 14:12:28.031506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.071 [2024-11-06 14:12:28.031714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031953] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.031982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032845] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.032972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 
14:12:28.033800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.033979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.034983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.072 [2024-11-06 14:12:28.035013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 
[2024-11-06 14:12:28.035226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035644] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.035879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.073 [2024-11-06 14:12:28.036291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.074 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:42.076 [... same ctrlr_bdev.c read error repeated through 14:12:28.046320 ...]
> SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046777] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.046973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 
14:12:28.047798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.047993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.076 [2024-11-06 14:12:28.048749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.048980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 
[2024-11-06 14:12:28.049122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049542] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.077 [2024-11-06 14:12:28.049975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050510] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.050986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 
14:12:28.051655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.051990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.052023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.052052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.077 [2024-11-06 14:12:28.052085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 
[2024-11-06 14:12:28.052570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.052985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053171] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.078 [2024-11-06 14:12:28.053670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated for every iteration between 14:12:28.053698 and 14:12:28.064484; duplicate lines omitted ...]
> SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.081 [2024-11-06 14:12:28.064919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.064948] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 
14:12:28.065967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.065991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.066979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 
[2024-11-06 14:12:28.067301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067764] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.067996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.082 [2024-11-06 14:12:28.068267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068628] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.068996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.069591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 
14:12:28.070114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.070998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 
[2024-11-06 14:12:28.071030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071455] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.083 [2024-11-06 14:12:28.071485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.084 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082918] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.082991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.083502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 
14:12:28.084367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.084990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 
[2024-11-06 14:12:28.085343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085758] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.085864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.086122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.086155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.087 [2024-11-06 14:12:28.086186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086903] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.086992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 
14:12:28.087809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.087990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.088984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 
[2024-11-06 14:12:28.089217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089641] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.088 [2024-11-06 14:12:28.089672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical message repeated for timestamps 2024-11-06 14:12:28.089700 through 14:12:28.100482 (00:30:42.088-00:30:42.092) ...]
00:30:42.092 [2024-11-06 14:12:28.100517] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.092 [2024-11-06 14:12:28.100976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101395] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.101973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 
14:12:28.102609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.102969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.092 [2024-11-06 14:12:28.103633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 
[2024-11-06 14:12:28.103662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.103984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104066] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.104981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105325] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.105976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 
14:12:28.106183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.106808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.093 [2024-11-06 14:12:28.107756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 
[2024-11-06 14:12:28.107851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.107996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108284] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 [2024-11-06 14:12:28.108317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.094 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:42.096 true 00:30:42.097
1 00:30:42.097 [2024-11-06 14:12:28.117953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.117982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118358] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.097 [2024-11-06 14:12:28.118921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119378] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.119978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 
14:12:28.120249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.120983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.121013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.121041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.097 [2024-11-06 14:12:28.121073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 
[2024-11-06 14:12:28.121500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121951] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.121979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122802] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.122886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.123961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 
14:12:28.123995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.098 [2024-11-06 14:12:28.124423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 
[2024-11-06 14:12:28.124885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.124972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.099 [2024-11-06 14:12:28.125653] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-06 14:12:28.136059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136543] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.136975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137591] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.137983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 
14:12:28.138461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.138996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.102 [2024-11-06 14:12:28.139177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 
[2024-11-06 14:12:28.139437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.139995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140300] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.140972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141153] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.141980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 
14:12:28.142134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.103 [2024-11-06 14:12:28.142874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.104 14:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:42.104 14:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.105 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:42.106 [2024-11-06 14:12:28.153064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153518] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.106 [2024-11-06 14:12:28.153579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.153984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 
14:12:28.154511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.154976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 
[2024-11-06 14:12:28.155420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155837] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.155975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.107 [2024-11-06 14:12:28.156233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.156982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157106] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.157962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 
14:12:28.157993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.158985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 
[2024-11-06 14:12:28.159158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159590] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.108 [2024-11-06 14:12:28.159915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.109 [2024-11-06 14:12:28.159940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.109 [2024-11-06 14:12:28.159967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:42.109 [2024-11-06 14:12:28.159992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:42.109 [2024-11-06 14:12:28.160021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical *ERROR* line repeated for each subsequent read request, timestamps 14:12:28.160051 through 14:12:28.170978 ...] 00:30:42.112 [2024-11-06 14:12:28.171009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.053 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.314 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:43.314 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:43.314 true 00:30:43.596 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:43.596 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.165 14:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.425 14:12:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:44.425 14:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:44.685 true 00:30:44.685 14:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:44.685 14:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.945 14:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.945 14:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:44.945 14:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:45.206 true 00:30:45.206 14:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:45.206 14:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 14:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.406 14:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:46.406 14:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:46.666 true 00:30:46.666 14:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:46.666 14:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:47.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:47.608 14:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.608 14:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:47.608 14:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:47.868 true 00:30:47.868 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:47.868 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.128 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.128 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:48.128 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:48.388 true 00:30:48.388 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:48.388 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.648 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.909 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
00:30:48.909 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:48.909 true 00:30:48.909 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:48.909 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.202 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.507 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:49.507 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:49.507 true 00:30:49.507 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:49.507 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.892 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:50.892 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:50.892 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:50.892 true 00:30:50.892 14:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:50.892 14:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.833 14:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.093 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:52.093 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:52.093 true 00:30:52.093 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 
00:30:52.093 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.353 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.614 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:52.614 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:52.614 true 00:30:52.874 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:52.874 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:53.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:53.814 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.075 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:54.075 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:54.335 true 00:30:54.335 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:54.335 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.276 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.276 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:55.276 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:55.537 true 00:30:55.537 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:55.537 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.797 14:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.797 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:55.797 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:56.058 true 00:30:56.058 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:56.058 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.318 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.318 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:56.318 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:56.579 true 00:30:56.579 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:56.579 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.840 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.100 14:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:57.100 14:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:57.100 true 00:30:57.100 14:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:57.100 14:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 14:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:58.485 14:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:58.485 14:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:58.746 true 00:30:58.746 14:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:58.746 14:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.690 14:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.690 14:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:59.690 14:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:59.951 true 00:30:59.952 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:30:59.952 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.952 14:12:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.213 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:00.213 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:00.474 true 00:31:00.474 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:31:00.474 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.474 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.736 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:00.736 14:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:00.996 true 00:31:00.996 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120 00:31:00.996 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:31:01.258 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:01.258 Initializing NVMe Controllers
00:31:01.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:01.258 Controller IO queue size 128, less than required.
00:31:01.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:01.258 Controller IO queue size 128, less than required.
00:31:01.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:01.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:01.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:01.258 Initialization complete. Launching workers.
00:31:01.258 ========================================================
00:31:01.258 Latency(us)
00:31:01.258 Device Information : IOPS MiB/s Average min max
00:31:01.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3270.22 1.60 25213.24 1573.65 1085555.42
00:31:01.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18564.59 9.06 6895.09 1118.37 401696.86
00:31:01.258 ========================================================
00:31:01.258 Total : 21834.81 10.66 9638.61 1118.37 1085555.42
00:31:01.258
00:31:01.258 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:31:01.258 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:31:01.518 true
00:31:01.518 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2613120
00:31:01.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2613120) - No such process
00:31:01.518 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2613120
00:31:01.518 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:01.779 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:01.779 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:01.779
14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:01.779 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:01.779 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.779 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:02.039 null0 00:31:02.039 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.039 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.039 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:02.039 null1 00:31:02.300 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.300 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.300 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:02.300 null2 00:31:02.300 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.300 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.300 14:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:02.560 null3 00:31:02.560 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.561 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.561 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:02.561 null4 00:31:02.561 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.561 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.561 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:02.822 null5 00:31:02.822 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.822 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.822 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:03.083 null6 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:03.083 null7 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.083 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2619193 2619196 2619199 2619203 2619204 2619207 2619210 2619213 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.084 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.345 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.346 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.605 14:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.605 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.865 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.127 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.127 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.127 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.387 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.387 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.649 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.649 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.649 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.912 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.912 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.912 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.912 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.912 14:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.912 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.174 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.435 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.696 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:31:05.958 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.958 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.958 14:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.958 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.221 14:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.221 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.482 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.483 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:06.744 14:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.744 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:07.005 14:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.005 rmmod nvme_tcp 00:31:07.005 rmmod nvme_fabrics 00:31:07.005 rmmod nvme_keyring 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2612506 ']' 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2612506 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2612506 ']' 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2612506 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2612506 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2612506' 00:31:07.005 killing process with pid 2612506 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2612506 00:31:07.005 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2612506 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.266 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.267 14:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.814 00:31:09.814 real 0m48.808s 00:31:09.814 user 2m57.746s 00:31:09.814 sys 0m21.057s 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:09.814 ************************************ 00:31:09.814 END TEST nvmf_ns_hotplug_stress 00:31:09.814 ************************************ 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.814 ************************************ 00:31:09.814 START TEST nvmf_delete_subsystem 00:31:09.814 
************************************ 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:09.814 * Looking for test storage... 00:31:09.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 
-- # local 'op=<' 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:09.814 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.815 14:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:09.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.815 --rc genhtml_branch_coverage=1 00:31:09.815 --rc genhtml_function_coverage=1 00:31:09.815 --rc genhtml_legend=1 00:31:09.815 --rc geninfo_all_blocks=1 00:31:09.815 --rc geninfo_unexecuted_blocks=1 00:31:09.815 00:31:09.815 ' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:09.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.815 --rc genhtml_branch_coverage=1 00:31:09.815 --rc genhtml_function_coverage=1 00:31:09.815 --rc genhtml_legend=1 00:31:09.815 --rc geninfo_all_blocks=1 00:31:09.815 --rc geninfo_unexecuted_blocks=1 00:31:09.815 00:31:09.815 ' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:09.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.815 --rc genhtml_branch_coverage=1 00:31:09.815 --rc 
genhtml_function_coverage=1 00:31:09.815 --rc genhtml_legend=1 00:31:09.815 --rc geninfo_all_blocks=1 00:31:09.815 --rc geninfo_unexecuted_blocks=1 00:31:09.815 00:31:09.815 ' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:09.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.815 --rc genhtml_branch_coverage=1 00:31:09.815 --rc genhtml_function_coverage=1 00:31:09.815 --rc genhtml_legend=1 00:31:09.815 --rc geninfo_all_blocks=1 00:31:09.815 --rc geninfo_unexecuted_blocks=1 00:31:09.815 00:31:09.815 ' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.815 14:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.815 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:17.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:17.960 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:17.960 Found net devices under 0000:31:00.0: cvl_0_0 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.960 14:13:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:17.960 Found net devices under 0000:31:00.1: cvl_0_1 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.960 14:13:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.960 14:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.960 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:31:17.961 00:31:17.961 --- 10.0.0.2 ping statistics --- 00:31:17.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.961 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:17.961 00:31:17.961 --- 10.0.0.1 ping statistics --- 00:31:17.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.961 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
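The network bring-up traced above (the `nvmf_tcp_init` steps from nvmf/common.sh) condenses to the sketch below. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the 10.0.0.1/10.0.0.2 addresses are taken from this run and will differ on other hosts; the steps are wrapped in a function because they require root and the E810 ports to be present.

```shell
#!/usr/bin/env bash
# Hedged sketch of the TCP init sequence traced above (nvmf_tcp_init).
# Assumptions: two E810 ports named cvl_0_0/cvl_0_1, run as root.
setup_tcp_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    # Move the target-side port into the namespace so target and
    # initiator traffic crosses the physical link, not the local stack.
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic on the default port, as the trace does.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The function is only defined, not invoked, so the sketch can be sourced safely on a machine without this hardware.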
-- common/autotest_common.sh@10 -- # set +x 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2624220 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2624220 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2624220 ']' 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:17.961 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.961 [2024-11-06 14:13:03.422942] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.961 [2024-11-06 14:13:03.424117] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:31:17.961 [2024-11-06 14:13:03.424167] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.961 [2024-11-06 14:13:03.525305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.961 [2024-11-06 14:13:03.577591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.961 [2024-11-06 14:13:03.577640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.961 [2024-11-06 14:13:03.577648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.961 [2024-11-06 14:13:03.577656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.961 [2024-11-06 14:13:03.577662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.961 [2024-11-06 14:13:03.579324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.961 [2024-11-06 14:13:03.579329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.961 [2024-11-06 14:13:03.657738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.961 [2024-11-06 14:13:03.658296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.961 [2024-11-06 14:13:03.658624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:17.961 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:17.961 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:17.961 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.222 [2024-11-06 14:13:04.284362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.222 [2024-11-06 14:13:04.316796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.222 NULL1 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:18.222 Delay0 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2624516 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:18.222 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:18.222 [2024-11-06 14:13:04.438616] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
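The target startup and test configuration traced above (delete_subsystem.sh lines 13-30) can be sketched as follows. Core masks, NQN, bdev sizes, and delay parameters come from this run; the `$SPDK_DIR` layout and the use of `scripts/rpc.py` as the RPC client are assumptions about the SPDK tree, and the whole sequence is wrapped in a function since it needs root, the `cvl_0_0_ns_spdk` namespace, and a built SPDK.

```shell
#!/usr/bin/env bash
# Hedged sketch of the delete_subsystem test flow traced above.
# Assumptions: SPDK_DIR points at a built SPDK tree; the namespace
# cvl_0_0_ns_spdk already exists (see the nvmf_tcp_init steps).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

run_delete_subsystem_test() {
    # Start the target inside the namespace: interrupt mode, cores 0-1.
    "${NS_CMD[@]}" "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # A null bdev behind a delay bdev keeps I/O outstanding so the
    # deletion below races against in-flight commands.
    rpc bdev_null_create NULL1 1000 512
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive I/O from the initiator side, then delete the subsystem
    # mid-run; the aborted completions in the log are the expected result.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait
}
```

The flood of `Read/Write completed with error (sct=0, sc=8)` lines that follows in the log is the perf workload observing its queued commands fail once the subsystem is torn down, which is the behavior this test exercises.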
00:31:20.138 14:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.138 14:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.138 14:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, 
sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 starting I/O failed: -6 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Read completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.399 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 [2024-11-06 14:13:06.571214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124f0e0 is same with the state(6) to be set 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error 
(sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 
Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, 
sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 
00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:20.400 Write completed with error (sct=0, sc=8) 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting 
I/O failed: -6 00:31:20.400 Read completed with error (sct=0, sc=8) 00:31:20.400 starting I/O failed: -6 00:31:21.344 [2024-11-06 14:13:07.538518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12505e0 is same with the state(6) to be set 00:31:21.344 Write completed with error (sct=0, sc=8) 00:31:21.344 Read completed with error (sct=0, sc=8) 00:31:21.344 [2024-11-06 14:13:07.574923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124f2c0 is same with the state(6) to be set 00:31:21.344 [2024-11-06 14:13:07.575118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124ef00 is same with the state(6) to be set 00:31:21.344 [2024-11-06 14:13:07.576995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f214c00d020 is same with the state(6) to be set 00:31:21.345 [2024-11-06 14:13:07.577950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f214c00d7e0 is same with the state(6) to be set 00:31:21.345 Initializing NVMe Controllers 00:31:21.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.345 Controller IO queue size 128, less than required. 00:31:21.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:21.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:21.345 Initialization complete. Launching workers.
00:31:21.345 ========================================================
00:31:21.345 Latency(us)
00:31:21.345 Device Information : IOPS MiB/s Average min max
00:31:21.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.18 0.08 907560.14 406.86 1008334.18
00:31:21.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.09 0.09 917598.61 421.92 1012102.79
00:31:21.345 ========================================================
00:31:21.345 Total : 348.27 0.17 912837.51 406.86 1012102.79
00:31:21.345
00:31:21.345 [2024-11-06 14:13:07.578712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12505e0 (9): Bad file descriptor 00:31:21.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:21.345 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.345 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:21.345 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2624516 00:31:21.345 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2624516 00:31:21.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2624516) - No such process 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2624516 00:31:21.917 14:13:08
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2624516 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2624516 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:21.917 [2024-11-06 14:13:08.112641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2625190 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:21.917 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:21.918 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:21.918 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.179 [2024-11-06 14:13:08.212162] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:22.440 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:22.440 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:22.440 14:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.011 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.011 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:23.011 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.582 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.582 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:23.582 14:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.153 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.153 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:24.153 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.414 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.414 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:24.414 14:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.985 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.985 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:24.985 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.245 Initializing NVMe Controllers 00:31:25.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.245 Controller IO queue size 128, less than required. 00:31:25.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:25.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:25.245 Initialization complete. Launching workers. 
00:31:25.245 ========================================================
00:31:25.245 Latency(us)
00:31:25.245 Device Information : IOPS MiB/s Average min max
00:31:25.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002660.45 1000189.14 1040829.89
00:31:25.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004217.07 1000253.38 1011261.48
00:31:25.245 ========================================================
00:31:25.245 Total : 256.00 0.12 1003438.76 1000189.14 1040829.89
00:31:25.245
00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2625190 00:31:25.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2625190) - No such process 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2625190 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.506 rmmod nvme_tcp 00:31:25.506 rmmod nvme_fabrics 00:31:25.506 rmmod nvme_keyring 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2624220 ']' 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2624220 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2624220 ']' 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2624220 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:25.506 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2624220 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:25.767 14:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2624220' 00:31:25.767 killing process with pid 2624220 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2624220 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2624220 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.767 14:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.767 14:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.312 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:28.312 00:31:28.312 real 0m18.429s 00:31:28.312 user 0m26.691s 00:31:28.312 sys 0m7.341s 00:31:28.312 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:28.312 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.313 ************************************ 00:31:28.313 END TEST nvmf_delete_subsystem 00:31:28.313 ************************************ 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:28.313 ************************************ 00:31:28.313 START TEST nvmf_host_management 00:31:28.313 ************************************ 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:28.313 * Looking for test storage... 
00:31:28.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.313 14:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.313 --rc genhtml_branch_coverage=1 00:31:28.313 --rc genhtml_function_coverage=1 00:31:28.313 --rc genhtml_legend=1 00:31:28.313 --rc geninfo_all_blocks=1 00:31:28.313 --rc geninfo_unexecuted_blocks=1 00:31:28.313 00:31:28.313 ' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.313 --rc genhtml_branch_coverage=1 00:31:28.313 --rc genhtml_function_coverage=1 00:31:28.313 --rc genhtml_legend=1 00:31:28.313 --rc geninfo_all_blocks=1 00:31:28.313 --rc geninfo_unexecuted_blocks=1 00:31:28.313 00:31:28.313 ' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.313 --rc genhtml_branch_coverage=1 00:31:28.313 --rc genhtml_function_coverage=1 00:31:28.313 --rc genhtml_legend=1 00:31:28.313 --rc geninfo_all_blocks=1 00:31:28.313 --rc geninfo_unexecuted_blocks=1 00:31:28.313 00:31:28.313 ' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:28.313 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.313 --rc genhtml_branch_coverage=1 00:31:28.313 --rc genhtml_function_coverage=1 00:31:28.313 --rc genhtml_legend=1 00:31:28.313 --rc geninfo_all_blocks=1 00:31:28.313 --rc geninfo_unexecuted_blocks=1 00:31:28.313 00:31:28.313 ' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.313 14:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.313 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.314 
14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.314 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.453 
14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.453 14:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:36.453 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.453 14:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:36.453 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.453 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.454 14:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:36.454 Found net devices under 0000:31:00.0: cvl_0_0 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:36.454 Found net devices under 0000:31:00.1: cvl_0_1 00:31:36.454 14:13:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:31:36.454 00:31:36.454 --- 10.0.0.2 ping statistics --- 00:31:36.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.454 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:31:36.454 00:31:36.454 --- 10.0.0.1 ping statistics --- 00:31:36.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.454 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2630028 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2630028 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2630028 ']' 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:36.454 14:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.454 [2024-11-06 14:13:21.992492] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.454 [2024-11-06 14:13:21.993650] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:31:36.454 [2024-11-06 14:13:21.993699] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.454 [2024-11-06 14:13:22.100004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.454 [2024-11-06 14:13:22.153858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.454 [2024-11-06 14:13:22.153905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.454 [2024-11-06 14:13:22.153914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.454 [2024-11-06 14:13:22.153921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.454 [2024-11-06 14:13:22.153927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:36.454 [2024-11-06 14:13:22.155931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.454 [2024-11-06 14:13:22.156205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.454 [2024-11-06 14:13:22.156382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:36.454 [2024-11-06 14:13:22.156386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.454 [2024-11-06 14:13:22.236269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.454 [2024-11-06 14:13:22.237563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.454 [2024-11-06 14:13:22.237586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.454 [2024-11-06 14:13:22.237944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.454 [2024-11-06 14:13:22.238003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.716 [2024-11-06 14:13:22.861369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.716 Malloc0 [2024-11-06 14:13:22.965582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:36.716 14:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2630268
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2630268 /var/tmp/bdevperf.sock
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2630268 ']'
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:36.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:36.978 {
00:31:36.978 "params": {
00:31:36.978 "name": "Nvme$subsystem",
00:31:36.978 "trtype": "$TEST_TRANSPORT",
00:31:36.978 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:36.978 "adrfam": "ipv4",
00:31:36.978 "trsvcid": "$NVMF_PORT",
00:31:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:36.978 "hdgst": ${hdgst:-false},
00:31:36.978 "ddgst": ${ddgst:-false}
00:31:36.978 },
00:31:36.978 "method": "bdev_nvme_attach_controller"
00:31:36.978 }
00:31:36.978 EOF
00:31:36.978 )")
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:31:36.978 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:36.978 "params": {
00:31:36.978 "name": "Nvme0",
00:31:36.978 "trtype": "tcp",
00:31:36.978 "traddr": "10.0.0.2",
00:31:36.978 "adrfam": "ipv4",
00:31:36.978 "trsvcid": "4420",
00:31:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:36.978 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:36.978 "hdgst": false,
00:31:36.978 "ddgst": false
00:31:36.978 },
00:31:36.978 "method": "bdev_nvme_attach_controller"
00:31:36.978 }'
00:31:36.978 [2024-11-06 14:13:23.074243] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:31:36.978 [2024-11-06 14:13:23.074315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630268 ]
00:31:36.978 [2024-11-06 14:13:23.168131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:36.978 [2024-11-06 14:13:23.221245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:37.239 Running I/O for 10 seconds...
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:37.812 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.813 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:37.813
[2024-11-06 14:13:23.989031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cda80 is same with the state(6) to be set
00:31:37.813 [2024-11-06 14:13:23.989096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cda80 is same with the state(6) to be set
00:31:37.813 [2024-11-06 14:13:23.989390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.813 [2024-11-06 14:13:23.989959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.813 [2024-11-06 14:13:23.989969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.989976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.989985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.989996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990549] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:37.814 [2024-11-06 14:13:23.990591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.814 [2024-11-06 14:13:23.990600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515c60 is same with the state(6) to be set
00:31:37.814 [2024-11-06 14:13:23.991928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:37.814 task offset: 89856 on job bdev=Nvme0n1 fails
00:31:37.814
00:31:37.814 Latency(us)
00:31:37.814 [2024-11-06T13:13:24.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:37.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:37.814 Job: Nvme0n1 ended in about 0.48 seconds with error
00:31:37.814 Verification LBA range: start 0x0 length 0x400
00:31:37.814 Nvme0n1 : 0.48 1333.36 83.34 133.34 0.00 42441.34 1966.08 37573.97
00:31:37.814 [2024-11-06T13:13:24.095Z] ===================================================================================================================
00:31:37.815 [2024-11-06T13:13:24.095Z] Total : 1333.36
83.34 133.34 0.00 42441.34 1966.08 37573.97 00:31:37.815 [2024-11-06 14:13:23.994197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:37.815 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.815 [2024-11-06 14:13:23.994238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1505280 (9): Bad file descriptor 00:31:37.815 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:37.815 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.815 14:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.815 [2024-11-06 14:13:23.995685] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:37.815 [2024-11-06 14:13:23.995791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:37.815 [2024-11-06 14:13:23.995834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.815 [2024-11-06 14:13:23.995850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:37.815 [2024-11-06 14:13:23.995859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:37.815 [2024-11-06 14:13:23.995868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.815 [2024-11-06 14:13:23.995877] 
nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1505280 00:31:37.815 [2024-11-06 14:13:23.995906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1505280 (9): Bad file descriptor 00:31:37.815 [2024-11-06 14:13:23.995926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:37.815 [2024-11-06 14:13:23.995934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:37.815 [2024-11-06 14:13:23.995944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:37.815 [2024-11-06 14:13:23.995955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:37.815 14:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.815 14:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:38.757 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2630268 00:31:38.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2630268) - No such process 00:31:38.757 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 
64 -o 65536 -w verify -t 1 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.758 { 00:31:38.758 "params": { 00:31:38.758 "name": "Nvme$subsystem", 00:31:38.758 "trtype": "$TEST_TRANSPORT", 00:31:38.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.758 "adrfam": "ipv4", 00:31:38.758 "trsvcid": "$NVMF_PORT", 00:31:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.758 "hdgst": ${hdgst:-false}, 00:31:38.758 "ddgst": ${ddgst:-false} 00:31:38.758 }, 00:31:38.758 "method": "bdev_nvme_attach_controller" 00:31:38.758 } 00:31:38.758 EOF 00:31:38.758 )") 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:38.758 14:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:38.758 "params": { 00:31:38.758 "name": "Nvme0", 00:31:38.758 "trtype": "tcp", 00:31:38.758 "traddr": "10.0.0.2", 00:31:38.758 "adrfam": "ipv4", 00:31:38.758 "trsvcid": "4420", 00:31:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.758 "hdgst": false, 00:31:38.758 "ddgst": false 00:31:38.758 }, 00:31:38.758 "method": "bdev_nvme_attach_controller" 00:31:38.758 }' 00:31:39.019 [2024-11-06 14:13:25.070127] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:31:39.019 [2024-11-06 14:13:25.070202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630622 ] 00:31:39.019 [2024-11-06 14:13:25.163738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.019 [2024-11-06 14:13:25.216888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.590 Running I/O for 1 seconds... 
00:31:40.532 1936.00 IOPS, 121.00 MiB/s 00:31:40.532 Latency(us) 00:31:40.532 [2024-11-06T13:13:26.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:40.532 Verification LBA range: start 0x0 length 0x400 00:31:40.532 Nvme0n1 : 1.02 1987.80 124.24 0.00 0.00 31518.95 1720.32 33641.81 00:31:40.532 [2024-11-06T13:13:26.812Z] =================================================================================================================== 00:31:40.532 [2024-11-06T13:13:26.812Z] Total : 1987.80 124.24 0.00 0.00 31518.95 1720.32 33641.81 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:40.532 
14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.532 rmmod nvme_tcp 00:31:40.532 rmmod nvme_fabrics 00:31:40.532 rmmod nvme_keyring 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2630028 ']' 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2630028 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2630028 ']' 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2630028 00:31:40.532 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2630028 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:40.793 14:13:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2630028' 00:31:40.793 killing process with pid 2630028 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2630028 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2630028 00:31:40.793 [2024-11-06 14:13:26.966736] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.793 14:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:43.503 00:31:43.503 real 0m15.009s 00:31:43.503 user 0m20.422s 00:31:43.503 sys 0m7.469s 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.503 ************************************ 00:31:43.503 END TEST nvmf_host_management 00:31:43.503 ************************************ 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:43.503 ************************************ 00:31:43.503 START TEST nvmf_lvol 00:31:43.503 ************************************ 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.503 * Looking for test storage... 
00:31:43.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:43.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.503 --rc genhtml_branch_coverage=1 00:31:43.503 --rc genhtml_function_coverage=1 00:31:43.503 --rc genhtml_legend=1 00:31:43.503 --rc geninfo_all_blocks=1 00:31:43.503 --rc geninfo_unexecuted_blocks=1 00:31:43.503 00:31:43.503 ' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:43.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.503 --rc genhtml_branch_coverage=1 00:31:43.503 --rc genhtml_function_coverage=1 00:31:43.503 --rc genhtml_legend=1 00:31:43.503 --rc geninfo_all_blocks=1 00:31:43.503 --rc geninfo_unexecuted_blocks=1 00:31:43.503 00:31:43.503 ' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:43.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.503 --rc genhtml_branch_coverage=1 00:31:43.503 --rc genhtml_function_coverage=1 00:31:43.503 --rc genhtml_legend=1 00:31:43.503 --rc geninfo_all_blocks=1 00:31:43.503 --rc geninfo_unexecuted_blocks=1 00:31:43.503 00:31:43.503 ' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:43.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.503 --rc genhtml_branch_coverage=1 00:31:43.503 --rc genhtml_function_coverage=1 00:31:43.503 --rc genhtml_legend=1 00:31:43.503 --rc geninfo_all_blocks=1 00:31:43.503 --rc geninfo_unexecuted_blocks=1 00:31:43.503 00:31:43.503 ' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:43.503 
14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.503 14:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.642 14:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.642 14:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:51.642 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:51.642 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:51.642 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.643 14:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:51.643 Found net devices under 0000:31:00.0: cvl_0_0 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.643 14:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:51.643 Found net devices under 0000:31:00.1: cvl_0_1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:31:51.643 00:31:51.643 --- 10.0.0.2 ping statistics --- 00:31:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.643 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:31:51.643 00:31:51.643 --- 10.0.0.1 ping statistics --- 00:31:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.643 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2635267 
00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2635267 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2635267 ']' 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:51.643 14:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.643 [2024-11-06 14:13:37.030112] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:51.643 [2024-11-06 14:13:37.031279] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:31:51.643 [2024-11-06 14:13:37.031332] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.643 [2024-11-06 14:13:37.132921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:51.643 [2024-11-06 14:13:37.184742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.643 [2024-11-06 14:13:37.184800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.643 [2024-11-06 14:13:37.184809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.643 [2024-11-06 14:13:37.184817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.643 [2024-11-06 14:13:37.184823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.643 [2024-11-06 14:13:37.186867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.643 [2024-11-06 14:13:37.187033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.643 [2024-11-06 14:13:37.187034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.643 [2024-11-06 14:13:37.265081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:51.643 [2024-11-06 14:13:37.266061] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:51.643 [2024-11-06 14:13:37.266934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:51.643 [2024-11-06 14:13:37.267033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.644 14:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:51.904 [2024-11-06 14:13:38.052097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.904 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:52.165 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:52.165 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:52.426 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:52.426 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:52.687 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:52.687 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=22b7c7c3-ffd5-4719-b27c-e1f9f9128c58 00:31:52.687 14:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22b7c7c3-ffd5-4719-b27c-e1f9f9128c58 lvol 20 00:31:52.948 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f3947371-1183-433c-912c-f43e8420708d 00:31:52.948 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:53.209 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3947371-1183-433c-912c-f43e8420708d 00:31:53.209 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.469 [2024-11-06 14:13:39.607973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.469 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:53.730 
14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2635698 00:31:53.730 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:53.730 14:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:54.673 14:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f3947371-1183-433c-912c-f43e8420708d MY_SNAPSHOT 00:31:54.933 14:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9ce3c4bf-65c1-48a6-b22a-c575a573f7ac 00:31:54.933 14:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f3947371-1183-433c-912c-f43e8420708d 30 00:31:55.195 14:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9ce3c4bf-65c1-48a6-b22a-c575a573f7ac MY_CLONE 00:31:55.456 14:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f35d405f-0692-486d-88c6-0a5cd7959de7 00:31:55.456 14:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f35d405f-0692-486d-88c6-0a5cd7959de7 00:31:56.027 14:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2635698 00:32:04.166 Initializing NVMe Controllers 00:32:04.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:04.166 
Controller IO queue size 128, less than required. 00:32:04.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:04.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:04.166 Initialization complete. Launching workers. 00:32:04.166 ======================================================== 00:32:04.166 Latency(us) 00:32:04.166 Device Information : IOPS MiB/s Average min max 00:32:04.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15216.40 59.44 8412.48 1824.17 68942.83 00:32:04.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15429.20 60.27 8296.21 4045.97 72086.95 00:32:04.166 ======================================================== 00:32:04.166 Total : 30645.60 119.71 8353.94 1824.17 72086.95 00:32:04.166 00:32:04.166 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.166 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3947371-1183-433c-912c-f43e8420708d 00:32:04.427 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22b7c7c3-ffd5-4719-b27c-e1f9f9128c58 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.688 rmmod nvme_tcp 00:32:04.688 rmmod nvme_fabrics 00:32:04.688 rmmod nvme_keyring 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2635267 ']' 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2635267 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2635267 ']' 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2635267 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 2635267 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2635267' 00:32:04.688 killing process with pid 2635267 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2635267 00:32:04.688 14:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2635267 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.950 14:13:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.950 14:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.865 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.865 00:32:06.865 real 0m23.951s 00:32:06.865 user 0m55.761s 00:32:06.865 sys 0m10.976s 00:32:06.865 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:06.865 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:06.865 ************************************ 00:32:06.865 END TEST nvmf_lvol 00:32:06.865 ************************************ 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:07.127 ************************************ 00:32:07.127 START TEST nvmf_lvs_grow 00:32:07.127 ************************************ 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:07.127 * Looking for test storage... 
00:32:07.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.127 14:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.127 14:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:07.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.127 --rc genhtml_branch_coverage=1 00:32:07.127 --rc genhtml_function_coverage=1 00:32:07.127 --rc genhtml_legend=1 00:32:07.127 --rc geninfo_all_blocks=1 00:32:07.127 --rc geninfo_unexecuted_blocks=1 00:32:07.127 00:32:07.127 ' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:07.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.127 --rc genhtml_branch_coverage=1 00:32:07.127 --rc genhtml_function_coverage=1 00:32:07.127 --rc genhtml_legend=1 00:32:07.127 --rc geninfo_all_blocks=1 00:32:07.127 --rc geninfo_unexecuted_blocks=1 00:32:07.127 00:32:07.127 ' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:07.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.127 --rc genhtml_branch_coverage=1 00:32:07.127 --rc genhtml_function_coverage=1 00:32:07.127 --rc genhtml_legend=1 00:32:07.127 --rc geninfo_all_blocks=1 00:32:07.127 --rc geninfo_unexecuted_blocks=1 00:32:07.127 00:32:07.127 ' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:07.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.127 --rc genhtml_branch_coverage=1 00:32:07.127 --rc genhtml_function_coverage=1 00:32:07.127 --rc genhtml_legend=1 00:32:07.127 --rc geninfo_all_blocks=1 00:32:07.127 --rc 
geninfo_unexecuted_blocks=1 00:32:07.127 00:32:07.127 ' 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.127 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:07.390 14:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.390 14:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.390 14:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.390 14:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.533 
14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.533 14:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.533 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.534 14:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:15.534 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:15.534 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:15.534 Found net devices under 0000:31:00.0: cvl_0_0 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.534 14:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:15.534 Found net devices under 0000:31:00.1: cvl_0_1 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.534 
14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:32:15.534 00:32:15.534 --- 10.0.0.2 ping statistics --- 00:32:15.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.534 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:32:15.534 00:32:15.534 --- 10.0.0.1 ping statistics --- 00:32:15.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.534 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.534 14:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.534 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:15.534 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.534 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:15.534 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.534 14:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2642068 00:32:15.534 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2642068 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2642068 ']' 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:15.535 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.535 [2024-11-06 14:14:01.107514] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:15.535 [2024-11-06 14:14:01.108645] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:32:15.535 [2024-11-06 14:14:01.108696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.535 [2024-11-06 14:14:01.209991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.535 [2024-11-06 14:14:01.260578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.535 [2024-11-06 14:14:01.260630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.535 [2024-11-06 14:14:01.260639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.535 [2024-11-06 14:14:01.260646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.535 [2024-11-06 14:14:01.260652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.535 [2024-11-06 14:14:01.261424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.535 [2024-11-06 14:14:01.339448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:15.535 [2024-11-06 14:14:01.339738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
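For readers following the trace, the interface plumbing executed by nvmf/common.sh above (the `ip netns` / `ip addr` / `ip link` / `iptables` calls around 14:14:00) boils down to a handful of commands. A dry-run sketch using the `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses taken from this log; `run()` only echoes each step, since the real commands need root and the physical NICs present on this CI node:

```shell
# Dry-run sketch of the netns-based TCP test topology built above.
# run() echoes instead of executing: the real steps require root and
# the cvl_0_0/cvl_0_1 interfaces that exist on the test machine.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                          # target-side network namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"         # move the target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # reachability check, as in the log
```

Keeping the target NIC inside its own namespace is what lets a single host act as both NVMe-oF target (10.0.0.2) and initiator (10.0.0.1) over real TCP.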
00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.796 14:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:16.057 [2024-11-06 14:14:02.146323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:16.057 ************************************ 00:32:16.057 START TEST lvs_grow_clean 00:32:16.057 ************************************ 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:32:16.057 14:14:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.057 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:16.317 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:16.317 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:16.577 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 lvol 150 00:32:16.836 14:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d65efd69-8288-4b62-985b-5bc80ed75708 00:32:16.837 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.837 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:17.098 [2024-11-06 14:14:03.169992] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:17.098 [2024-11-06 14:14:03.170160] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:17.098 true 00:32:17.098 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:17.098 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:17.098 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:17.098 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:17.359 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d65efd69-8288-4b62-985b-5bc80ed75708 00:32:17.619 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.880 [2024-11-06 14:14:03.910675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.880 14:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2642628 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2642628 /var/tmp/bdevperf.sock 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2642628 ']' 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
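The "Waiting for process to start up and listen on UNIX domain socket" message comes from the harness polling for bdevperf's RPC socket before issuing RPCs. A minimal stand-in for that polling pattern (the socket path and the background `touch` are stand-ins; the real harness waits on `/var/tmp/bdevperf.sock` created by bdevperf itself):

```shell
# Poll for a socket/file to appear, with a bounded number of retries.
sock="$(mktemp -u)"                  # stand-in for /var/tmp/bdevperf.sock
( sleep 0.2; touch "$sock" ) &       # simulate bdevperf creating its socket
status=timeout
for i in $(seq 1 50); do             # bounded retries, like waitforlisten's max_retries=100
    if [ -e "$sock" ]; then
        status=listening
        break
    fi
    sleep 0.1
done
wait
echo "$status"
rm -f "$sock"
```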
00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:17.880 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:18.141 [2024-11-06 14:14:04.161980] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:32:18.142 [2024-11-06 14:14:04.162053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642628 ] 00:32:18.142 [2024-11-06 14:14:04.254147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.142 [2024-11-06 14:14:04.306573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.713 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:18.713 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:18.713 14:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:19.284 Nvme0n1 00:32:19.284 14:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:19.284 [ 00:32:19.284 { 00:32:19.284 "name": "Nvme0n1", 00:32:19.284 "aliases": [ 00:32:19.284 "d65efd69-8288-4b62-985b-5bc80ed75708" 00:32:19.284 ], 00:32:19.284 "product_name": "NVMe disk", 00:32:19.284 
"block_size": 4096, 00:32:19.284 "num_blocks": 38912, 00:32:19.284 "uuid": "d65efd69-8288-4b62-985b-5bc80ed75708", 00:32:19.284 "numa_id": 0, 00:32:19.284 "assigned_rate_limits": { 00:32:19.284 "rw_ios_per_sec": 0, 00:32:19.284 "rw_mbytes_per_sec": 0, 00:32:19.284 "r_mbytes_per_sec": 0, 00:32:19.284 "w_mbytes_per_sec": 0 00:32:19.284 }, 00:32:19.284 "claimed": false, 00:32:19.284 "zoned": false, 00:32:19.284 "supported_io_types": { 00:32:19.284 "read": true, 00:32:19.284 "write": true, 00:32:19.284 "unmap": true, 00:32:19.284 "flush": true, 00:32:19.284 "reset": true, 00:32:19.284 "nvme_admin": true, 00:32:19.284 "nvme_io": true, 00:32:19.284 "nvme_io_md": false, 00:32:19.284 "write_zeroes": true, 00:32:19.284 "zcopy": false, 00:32:19.284 "get_zone_info": false, 00:32:19.284 "zone_management": false, 00:32:19.284 "zone_append": false, 00:32:19.284 "compare": true, 00:32:19.284 "compare_and_write": true, 00:32:19.284 "abort": true, 00:32:19.284 "seek_hole": false, 00:32:19.284 "seek_data": false, 00:32:19.284 "copy": true, 00:32:19.284 "nvme_iov_md": false 00:32:19.284 }, 00:32:19.284 "memory_domains": [ 00:32:19.284 { 00:32:19.284 "dma_device_id": "system", 00:32:19.284 "dma_device_type": 1 00:32:19.284 } 00:32:19.284 ], 00:32:19.284 "driver_specific": { 00:32:19.284 "nvme": [ 00:32:19.284 { 00:32:19.284 "trid": { 00:32:19.284 "trtype": "TCP", 00:32:19.284 "adrfam": "IPv4", 00:32:19.284 "traddr": "10.0.0.2", 00:32:19.284 "trsvcid": "4420", 00:32:19.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:19.284 }, 00:32:19.284 "ctrlr_data": { 00:32:19.285 "cntlid": 1, 00:32:19.285 "vendor_id": "0x8086", 00:32:19.285 "model_number": "SPDK bdev Controller", 00:32:19.285 "serial_number": "SPDK0", 00:32:19.285 "firmware_revision": "25.01", 00:32:19.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.285 "oacs": { 00:32:19.285 "security": 0, 00:32:19.285 "format": 0, 00:32:19.285 "firmware": 0, 00:32:19.285 "ns_manage": 0 00:32:19.285 }, 00:32:19.285 "multi_ctrlr": true, 
00:32:19.285 "ana_reporting": false 00:32:19.285 }, 00:32:19.285 "vs": { 00:32:19.285 "nvme_version": "1.3" 00:32:19.285 }, 00:32:19.285 "ns_data": { 00:32:19.285 "id": 1, 00:32:19.285 "can_share": true 00:32:19.285 } 00:32:19.285 } 00:32:19.285 ], 00:32:19.285 "mp_policy": "active_passive" 00:32:19.285 } 00:32:19.285 } 00:32:19.285 ] 00:32:19.548 14:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.548 14:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2642808 00:32:19.548 14:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:19.548 Running I/O for 10 seconds... 00:32:20.494 Latency(us) 00:32:20.494 [2024-11-06T13:14:06.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.494 Nvme0n1 : 1.00 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:32:20.494 [2024-11-06T13:14:06.774Z] =================================================================================================================== 00:32:20.494 [2024-11-06T13:14:06.774Z] Total : 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:32:20.494 00:32:21.437 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:21.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.437 Nvme0n1 : 2.00 16605.50 64.87 0.00 0.00 0.00 0.00 0.00 00:32:21.437 [2024-11-06T13:14:07.717Z] 
=================================================================================================================== 00:32:21.437 [2024-11-06T13:14:07.718Z] Total : 16605.50 64.87 0.00 0.00 0.00 0.00 0.00 00:32:21.438 00:32:21.698 true 00:32:21.698 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:21.698 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:21.698 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:21.698 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:21.698 14:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2642808 00:32:22.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.638 Nvme0n1 : 3.00 16912.33 66.06 0.00 0.00 0.00 0.00 0.00 00:32:22.638 [2024-11-06T13:14:08.918Z] =================================================================================================================== 00:32:22.638 [2024-11-06T13:14:08.918Z] Total : 16912.33 66.06 0.00 0.00 0.00 0.00 0.00 00:32:22.638 00:32:23.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.577 Nvme0n1 : 4.00 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:32:23.577 [2024-11-06T13:14:09.857Z] =================================================================================================================== 00:32:23.577 [2024-11-06T13:14:09.857Z] Total : 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:32:23.577 00:32:24.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:24.516 Nvme0n1 : 5.00 18656.60 72.88 0.00 0.00 0.00 0.00 0.00 00:32:24.516 [2024-11-06T13:14:10.796Z] =================================================================================================================== 00:32:24.516 [2024-11-06T13:14:10.796Z] Total : 18656.60 72.88 0.00 0.00 0.00 0.00 0.00 00:32:24.516 00:32:25.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.458 Nvme0n1 : 6.00 19674.67 76.85 0.00 0.00 0.00 0.00 0.00 00:32:25.458 [2024-11-06T13:14:11.738Z] =================================================================================================================== 00:32:25.458 [2024-11-06T13:14:11.738Z] Total : 19674.67 76.85 0.00 0.00 0.00 0.00 0.00 00:32:25.458 00:32:26.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.399 Nvme0n1 : 7.00 20411.00 79.73 0.00 0.00 0.00 0.00 0.00 00:32:26.399 [2024-11-06T13:14:12.679Z] =================================================================================================================== 00:32:26.399 [2024-11-06T13:14:12.679Z] Total : 20411.00 79.73 0.00 0.00 0.00 0.00 0.00 00:32:26.399 00:32:27.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.782 Nvme0n1 : 8.00 20963.25 81.89 0.00 0.00 0.00 0.00 0.00 00:32:27.782 [2024-11-06T13:14:14.062Z] =================================================================================================================== 00:32:27.782 [2024-11-06T13:14:14.062Z] Total : 20963.25 81.89 0.00 0.00 0.00 0.00 0.00 00:32:27.782 00:32:28.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.722 Nvme0n1 : 9.00 21398.11 83.59 0.00 0.00 0.00 0.00 0.00 00:32:28.722 [2024-11-06T13:14:15.002Z] =================================================================================================================== 00:32:28.722 [2024-11-06T13:14:15.002Z] Total : 21398.11 83.59 0.00 0.00 0.00 0.00 0.00 00:32:28.722 
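The grow check in this run hinges on simple size arithmetic: the backing file is truncated from 200M to 400M, the AIO bdev is rescanned (block count 51200 to 102400 at a 4096-byte block size), and `total_data_clusters` goes from 49 to 99 with 4 MiB clusters. A sketch of that arithmetic (assumption: lvstore metadata consumes one cluster, which is what makes the counts 49/99 rather than 50/100 and matches the observed values):

```shell
# Block and data-cluster counts for the 200M -> 400M aio_bdev grow.
block_sz=4096        # bdev_aio_create block size used by the test
cluster_mb=4         # --cluster-sz 4194304 passed to bdev_lvol_create_lvstore
for size_mb in 200 400; do
    blocks=$(( size_mb * 1024 * 1024 / block_sz ))
    clusters=$(( size_mb / cluster_mb - 1 ))   # minus one metadata cluster (assumed)
    echo "${size_mb}M file: $blocks blocks, $clusters data clusters"
done
```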
00:32:29.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.662 Nvme0n1 : 10.00 21734.80 84.90 0.00 0.00 0.00 0.00 0.00 00:32:29.662 [2024-11-06T13:14:15.942Z] =================================================================================================================== 00:32:29.662 [2024-11-06T13:14:15.942Z] Total : 21734.80 84.90 0.00 0.00 0.00 0.00 0.00 00:32:29.662 00:32:29.662 00:32:29.662 Latency(us) 00:32:29.662 [2024-11-06T13:14:15.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.662 Nvme0n1 : 10.00 21738.58 84.92 0.00 0.00 5885.00 2880.85 33204.91 00:32:29.662 [2024-11-06T13:14:15.942Z] =================================================================================================================== 00:32:29.662 [2024-11-06T13:14:15.942Z] Total : 21738.58 84.92 0.00 0.00 5885.00 2880.85 33204.91 00:32:29.662 { 00:32:29.662 "results": [ 00:32:29.662 { 00:32:29.662 "job": "Nvme0n1", 00:32:29.662 "core_mask": "0x2", 00:32:29.662 "workload": "randwrite", 00:32:29.662 "status": "finished", 00:32:29.662 "queue_depth": 128, 00:32:29.662 "io_size": 4096, 00:32:29.662 "runtime": 10.004148, 00:32:29.662 "iops": 21738.582835839694, 00:32:29.662 "mibps": 84.9163392024988, 00:32:29.662 "io_failed": 0, 00:32:29.662 "io_timeout": 0, 00:32:29.662 "avg_latency_us": 5884.996280478459, 00:32:29.662 "min_latency_us": 2880.8533333333335, 00:32:29.662 "max_latency_us": 33204.90666666667 00:32:29.662 } 00:32:29.662 ], 00:32:29.662 "core_count": 1 00:32:29.662 } 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2642628 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2642628 ']' 00:32:29.663 14:14:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2642628 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2642628 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2642628' 00:32:29.663 killing process with pid 2642628 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2642628 00:32:29.663 Received shutdown signal, test time was about 10.000000 seconds 00:32:29.663 00:32:29.663 Latency(us) 00:32:29.663 [2024-11-06T13:14:15.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.663 [2024-11-06T13:14:15.943Z] =================================================================================================================== 00:32:29.663 [2024-11-06T13:14:15.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2642628 00:32:29.663 14:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.923 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:30.183 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:30.183 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:30.183 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:30.183 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:30.183 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:30.444 [2024-11-06 14:14:16.558080] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:30.444 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:30.706 request: 00:32:30.706 { 00:32:30.706 "uuid": "61c71379-d93b-4c42-a37e-aec0cdf010e2", 00:32:30.706 "method": 
"bdev_lvol_get_lvstores", 00:32:30.706 "req_id": 1 00:32:30.706 } 00:32:30.706 Got JSON-RPC error response 00:32:30.706 response: 00:32:30.706 { 00:32:30.706 "code": -19, 00:32:30.706 "message": "No such device" 00:32:30.706 } 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:30.706 aio_bdev 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d65efd69-8288-4b62-985b-5bc80ed75708 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=d65efd69-8288-4b62-985b-5bc80ed75708 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:30.706 14:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:30.966 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d65efd69-8288-4b62-985b-5bc80ed75708 -t 2000 00:32:31.227 [ 00:32:31.227 { 00:32:31.227 "name": "d65efd69-8288-4b62-985b-5bc80ed75708", 00:32:31.227 "aliases": [ 00:32:31.227 "lvs/lvol" 00:32:31.227 ], 00:32:31.227 "product_name": "Logical Volume", 00:32:31.227 "block_size": 4096, 00:32:31.227 "num_blocks": 38912, 00:32:31.227 "uuid": "d65efd69-8288-4b62-985b-5bc80ed75708", 00:32:31.227 "assigned_rate_limits": { 00:32:31.227 "rw_ios_per_sec": 0, 00:32:31.227 "rw_mbytes_per_sec": 0, 00:32:31.227 "r_mbytes_per_sec": 0, 00:32:31.227 "w_mbytes_per_sec": 0 00:32:31.227 }, 00:32:31.227 "claimed": false, 00:32:31.227 "zoned": false, 00:32:31.227 "supported_io_types": { 00:32:31.227 "read": true, 00:32:31.227 "write": true, 00:32:31.227 "unmap": true, 00:32:31.227 "flush": false, 00:32:31.227 "reset": true, 00:32:31.227 "nvme_admin": false, 00:32:31.227 "nvme_io": false, 00:32:31.227 "nvme_io_md": false, 00:32:31.227 "write_zeroes": true, 00:32:31.227 "zcopy": false, 00:32:31.227 "get_zone_info": false, 00:32:31.227 "zone_management": false, 00:32:31.227 "zone_append": false, 00:32:31.227 "compare": false, 00:32:31.227 "compare_and_write": false, 00:32:31.227 "abort": false, 00:32:31.227 "seek_hole": true, 00:32:31.227 "seek_data": true, 00:32:31.227 "copy": false, 00:32:31.227 "nvme_iov_md": false 00:32:31.227 }, 00:32:31.227 "driver_specific": { 00:32:31.227 "lvol": { 00:32:31.227 "lvol_store_uuid": "61c71379-d93b-4c42-a37e-aec0cdf010e2", 00:32:31.227 "base_bdev": "aio_bdev", 00:32:31.227 
"thin_provision": false, 00:32:31.227 "num_allocated_clusters": 38, 00:32:31.227 "snapshot": false, 00:32:31.227 "clone": false, 00:32:31.227 "esnap_clone": false 00:32:31.227 } 00:32:31.227 } 00:32:31.227 } 00:32:31.227 ] 00:32:31.227 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:31.227 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:31.227 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:31.487 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:31.487 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 00:32:31.487 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:31.487 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:31.487 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d65efd69-8288-4b62-985b-5bc80ed75708 00:32:31.747 14:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61c71379-d93b-4c42-a37e-aec0cdf010e2 
00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.008 00:32:32.008 real 0m16.013s 00:32:32.008 user 0m15.622s 00:32:32.008 sys 0m1.461s 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:32.008 ************************************ 00:32:32.008 END TEST lvs_grow_clean 00:32:32.008 ************************************ 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:32.008 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.268 ************************************ 00:32:32.268 START TEST lvs_grow_dirty 00:32:32.268 ************************************ 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:32.268 14:14:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:32.268 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:32.529 14:14:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:32.529 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:32.529 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:32.789 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:32.789 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:32.789 14:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 lvol 150 00:32:33.050 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:33.050 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:33.050 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:33.050 [2024-11-06 14:14:19.246004] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:33.050 [2024-11-06 
14:14:19.246174] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:33.050 true 00:32:33.050 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:33.050 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:33.310 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:33.310 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:33.310 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:33.571 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.832 [2024-11-06 14:14:19.914589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.832 14:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2645561 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2645561 /var/tmp/bdevperf.sock 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2645561 ']' 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.832 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 [2024-11-06 14:14:20.140653] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:32:34.094 [2024-11-06 14:14:20.140724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645561 ] 00:32:34.094 [2024-11-06 14:14:20.231141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.094 [2024-11-06 14:14:20.266318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.094 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.094 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:34.094 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:34.664 Nvme0n1 00:32:34.664 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:34.664 [ 00:32:34.664 { 00:32:34.664 "name": "Nvme0n1", 00:32:34.664 "aliases": [ 00:32:34.664 "1211c861-18f3-4dd1-9821-3cd695d042cb" 00:32:34.664 ], 00:32:34.664 "product_name": "NVMe disk", 00:32:34.664 "block_size": 4096, 00:32:34.664 "num_blocks": 38912, 00:32:34.664 "uuid": "1211c861-18f3-4dd1-9821-3cd695d042cb", 00:32:34.664 "numa_id": 0, 00:32:34.664 "assigned_rate_limits": { 00:32:34.664 "rw_ios_per_sec": 0, 00:32:34.664 "rw_mbytes_per_sec": 0, 00:32:34.664 "r_mbytes_per_sec": 0, 00:32:34.664 "w_mbytes_per_sec": 0 00:32:34.664 }, 00:32:34.664 "claimed": false, 00:32:34.664 "zoned": false, 
00:32:34.664 "supported_io_types": { 00:32:34.664 "read": true, 00:32:34.664 "write": true, 00:32:34.664 "unmap": true, 00:32:34.664 "flush": true, 00:32:34.664 "reset": true, 00:32:34.664 "nvme_admin": true, 00:32:34.664 "nvme_io": true, 00:32:34.664 "nvme_io_md": false, 00:32:34.664 "write_zeroes": true, 00:32:34.664 "zcopy": false, 00:32:34.664 "get_zone_info": false, 00:32:34.664 "zone_management": false, 00:32:34.664 "zone_append": false, 00:32:34.664 "compare": true, 00:32:34.664 "compare_and_write": true, 00:32:34.664 "abort": true, 00:32:34.664 "seek_hole": false, 00:32:34.664 "seek_data": false, 00:32:34.664 "copy": true, 00:32:34.664 "nvme_iov_md": false 00:32:34.664 }, 00:32:34.664 "memory_domains": [ 00:32:34.664 { 00:32:34.664 "dma_device_id": "system", 00:32:34.664 "dma_device_type": 1 00:32:34.664 } 00:32:34.664 ], 00:32:34.664 "driver_specific": { 00:32:34.664 "nvme": [ 00:32:34.664 { 00:32:34.664 "trid": { 00:32:34.664 "trtype": "TCP", 00:32:34.664 "adrfam": "IPv4", 00:32:34.664 "traddr": "10.0.0.2", 00:32:34.664 "trsvcid": "4420", 00:32:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:34.664 }, 00:32:34.664 "ctrlr_data": { 00:32:34.664 "cntlid": 1, 00:32:34.664 "vendor_id": "0x8086", 00:32:34.664 "model_number": "SPDK bdev Controller", 00:32:34.664 "serial_number": "SPDK0", 00:32:34.664 "firmware_revision": "25.01", 00:32:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.664 "oacs": { 00:32:34.664 "security": 0, 00:32:34.664 "format": 0, 00:32:34.664 "firmware": 0, 00:32:34.664 "ns_manage": 0 00:32:34.664 }, 00:32:34.664 "multi_ctrlr": true, 00:32:34.664 "ana_reporting": false 00:32:34.664 }, 00:32:34.664 "vs": { 00:32:34.664 "nvme_version": "1.3" 00:32:34.664 }, 00:32:34.664 "ns_data": { 00:32:34.664 "id": 1, 00:32:34.664 "can_share": true 00:32:34.664 } 00:32:34.664 } 00:32:34.664 ], 00:32:34.665 "mp_policy": "active_passive" 00:32:34.665 } 00:32:34.665 } 00:32:34.665 ] 00:32:34.665 14:14:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2645852 00:32:34.665 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:34.665 14:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.665 Running I/O for 10 seconds... 00:32:36.048 Latency(us) 00:32:36.048 [2024-11-06T13:14:22.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.048 Nvme0n1 : 1.00 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:36.048 [2024-11-06T13:14:22.328Z] =================================================================================================================== 00:32:36.048 [2024-11-06T13:14:22.328Z] Total : 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:36.048 00:32:36.618 14:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:36.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.878 Nvme0n1 : 2.00 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:32:36.878 [2024-11-06T13:14:23.158Z] =================================================================================================================== 00:32:36.878 [2024-11-06T13:14:23.158Z] Total : 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:32:36.878 00:32:36.878 true 00:32:36.878 14:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:36.878 14:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:37.138 14:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:37.138 14:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:37.138 14:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2645852 00:32:37.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.708 Nvme0n1 : 3.00 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:32:37.708 [2024-11-06T13:14:23.988Z] =================================================================================================================== 00:32:37.708 [2024-11-06T13:14:23.988Z] Total : 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:32:37.708 00:32:39.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.102 Nvme0n1 : 4.00 17430.75 68.09 0.00 0.00 0.00 0.00 0.00 00:32:39.102 [2024-11-06T13:14:25.382Z] =================================================================================================================== 00:32:39.102 [2024-11-06T13:14:25.382Z] Total : 17430.75 68.09 0.00 0.00 0.00 0.00 0.00 00:32:39.102 00:32:39.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.723 Nvme0n1 : 5.00 18161.00 70.94 0.00 0.00 0.00 0.00 0.00 00:32:39.723 [2024-11-06T13:14:26.003Z] =================================================================================================================== 00:32:39.723 [2024-11-06T13:14:26.003Z] Total : 18161.00 70.94 0.00 0.00 0.00 0.00 0.00 00:32:39.723 00:32:40.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:40.681 Nvme0n1 : 6.00 19261.67 75.24 0.00 0.00 0.00 0.00 0.00 00:32:40.681 [2024-11-06T13:14:26.961Z] =================================================================================================================== 00:32:40.681 [2024-11-06T13:14:26.961Z] Total : 19261.67 75.24 0.00 0.00 0.00 0.00 0.00 00:32:40.681 00:32:42.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.065 Nvme0n1 : 7.00 20057.00 78.35 0.00 0.00 0.00 0.00 0.00 00:32:42.065 [2024-11-06T13:14:28.345Z] =================================================================================================================== 00:32:42.065 [2024-11-06T13:14:28.345Z] Total : 20057.00 78.35 0.00 0.00 0.00 0.00 0.00 00:32:42.065 00:32:43.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.005 Nvme0n1 : 8.00 20659.62 80.70 0.00 0.00 0.00 0.00 0.00 00:32:43.005 [2024-11-06T13:14:29.285Z] =================================================================================================================== 00:32:43.005 [2024-11-06T13:14:29.285Z] Total : 20659.62 80.70 0.00 0.00 0.00 0.00 0.00 00:32:43.005 00:32:43.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.947 Nvme0n1 : 9.00 21117.67 82.49 0.00 0.00 0.00 0.00 0.00 00:32:43.947 [2024-11-06T13:14:30.227Z] =================================================================================================================== 00:32:43.947 [2024-11-06T13:14:30.227Z] Total : 21117.67 82.49 0.00 0.00 0.00 0.00 0.00 00:32:43.947 00:32:44.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.890 Nvme0n1 : 10.00 21482.40 83.92 0.00 0.00 0.00 0.00 0.00 00:32:44.890 [2024-11-06T13:14:31.170Z] =================================================================================================================== 00:32:44.890 [2024-11-06T13:14:31.170Z] Total : 21482.40 83.92 0.00 0.00 0.00 0.00 0.00 00:32:44.890 00:32:44.890 
00:32:44.890 Latency(us) 00:32:44.890 [2024-11-06T13:14:31.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.890 Nvme0n1 : 10.00 21489.93 83.95 0.00 0.00 5953.71 2908.16 31020.37 00:32:44.890 [2024-11-06T13:14:31.170Z] =================================================================================================================== 00:32:44.890 [2024-11-06T13:14:31.170Z] Total : 21489.93 83.95 0.00 0.00 5953.71 2908.16 31020.37 00:32:44.890 { 00:32:44.890 "results": [ 00:32:44.890 { 00:32:44.890 "job": "Nvme0n1", 00:32:44.890 "core_mask": "0x2", 00:32:44.890 "workload": "randwrite", 00:32:44.890 "status": "finished", 00:32:44.890 "queue_depth": 128, 00:32:44.890 "io_size": 4096, 00:32:44.890 "runtime": 10.002454, 00:32:44.890 "iops": 21489.926372068294, 00:32:44.890 "mibps": 83.94502489089177, 00:32:44.890 "io_failed": 0, 00:32:44.890 "io_timeout": 0, 00:32:44.890 "avg_latency_us": 5953.708921557682, 00:32:44.890 "min_latency_us": 2908.16, 00:32:44.890 "max_latency_us": 31020.373333333333 00:32:44.890 } 00:32:44.890 ], 00:32:44.890 "core_count": 1 00:32:44.890 } 00:32:44.890 14:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2645561 00:32:44.890 14:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2645561 ']' 00:32:44.890 14:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2645561 00:32:44.890 14:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:44.890 14:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:44.890 14:14:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2645561 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2645561' 00:32:44.890 killing process with pid 2645561 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2645561 00:32:44.890 Received shutdown signal, test time was about 10.000000 seconds 00:32:44.890 00:32:44.890 Latency(us) 00:32:44.890 [2024-11-06T13:14:31.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.890 [2024-11-06T13:14:31.170Z] =================================================================================================================== 00:32:44.890 [2024-11-06T13:14:31.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2645561 00:32:44.890 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.151 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.411 14:14:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:45.411 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:45.411 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:45.411 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:45.412 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2642068 00:32:45.412 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2642068 00:32:45.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2642068 Killed "${NVMF_APP[@]}" "$@" 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2647876 00:32:45.672 14:14:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2647876 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2647876 ']' 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.672 14:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.672 [2024-11-06 14:14:31.776773] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.672 [2024-11-06 14:14:31.777777] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:32:45.672 [2024-11-06 14:14:31.777818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.672 [2024-11-06 14:14:31.867144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.672 [2024-11-06 14:14:31.896379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.672 [2024-11-06 14:14:31.896404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.672 [2024-11-06 14:14:31.896409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.672 [2024-11-06 14:14:31.896414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.672 [2024-11-06 14:14:31.896418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.672 [2024-11-06 14:14:31.896850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.672 [2024-11-06 14:14:31.948171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.672 [2024-11-06 14:14:31.948358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.613 [2024-11-06 14:14:32.768435] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:46.613 [2024-11-06 14:14:32.768583] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:46.613 [2024-11-06 14:14:32.768628] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:46.613 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.874 14:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1211c861-18f3-4dd1-9821-3cd695d042cb -t 2000 00:32:46.874 [ 00:32:46.874 { 00:32:46.874 "name": "1211c861-18f3-4dd1-9821-3cd695d042cb", 00:32:46.874 "aliases": [ 00:32:46.874 "lvs/lvol" 00:32:46.874 ], 00:32:46.874 "product_name": "Logical Volume", 00:32:46.874 "block_size": 4096, 00:32:46.874 "num_blocks": 38912, 00:32:46.874 "uuid": "1211c861-18f3-4dd1-9821-3cd695d042cb", 00:32:46.874 "assigned_rate_limits": { 00:32:46.874 "rw_ios_per_sec": 0, 00:32:46.874 "rw_mbytes_per_sec": 0, 00:32:46.874 "r_mbytes_per_sec": 0, 00:32:46.874 "w_mbytes_per_sec": 0 00:32:46.874 }, 00:32:46.874 "claimed": false, 00:32:46.874 "zoned": false, 00:32:46.874 "supported_io_types": { 00:32:46.874 "read": true, 00:32:46.874 "write": true, 00:32:46.874 "unmap": true, 00:32:46.874 "flush": false, 00:32:46.874 "reset": true, 00:32:46.874 "nvme_admin": false, 00:32:46.874 "nvme_io": false, 00:32:46.874 "nvme_io_md": false, 00:32:46.874 "write_zeroes": true, 
00:32:46.874 "zcopy": false, 00:32:46.874 "get_zone_info": false, 00:32:46.874 "zone_management": false, 00:32:46.874 "zone_append": false, 00:32:46.874 "compare": false, 00:32:46.874 "compare_and_write": false, 00:32:46.874 "abort": false, 00:32:46.874 "seek_hole": true, 00:32:46.874 "seek_data": true, 00:32:46.874 "copy": false, 00:32:46.874 "nvme_iov_md": false 00:32:46.874 }, 00:32:46.874 "driver_specific": { 00:32:46.874 "lvol": { 00:32:46.874 "lvol_store_uuid": "605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7", 00:32:46.874 "base_bdev": "aio_bdev", 00:32:46.874 "thin_provision": false, 00:32:46.874 "num_allocated_clusters": 38, 00:32:46.874 "snapshot": false, 00:32:46.874 "clone": false, 00:32:46.874 "esnap_clone": false 00:32:46.874 } 00:32:46.874 } 00:32:46.874 } 00:32:46.874 ] 00:32:46.874 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:46.874 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:46.874 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:47.135 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:47.135 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:47.135 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.396 [2024-11-06 14:14:33.637314] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.396 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:47.657 request: 00:32:47.657 { 00:32:47.657 "uuid": "605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7", 00:32:47.657 "method": "bdev_lvol_get_lvstores", 00:32:47.657 "req_id": 1 00:32:47.657 } 00:32:47.657 Got JSON-RPC error response 00:32:47.657 response: 00:32:47.657 { 00:32:47.657 "code": -19, 00:32:47.657 "message": "No such device" 00:32:47.657 } 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:47.657 14:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:47.917 aio_bdev 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:47.917 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1211c861-18f3-4dd1-9821-3cd695d042cb -t 2000 00:32:48.178 [ 00:32:48.178 { 00:32:48.178 "name": "1211c861-18f3-4dd1-9821-3cd695d042cb", 00:32:48.178 "aliases": [ 00:32:48.178 "lvs/lvol" 00:32:48.178 ], 00:32:48.178 "product_name": "Logical Volume", 00:32:48.178 "block_size": 4096, 00:32:48.178 "num_blocks": 38912, 00:32:48.178 "uuid": "1211c861-18f3-4dd1-9821-3cd695d042cb", 00:32:48.178 "assigned_rate_limits": { 00:32:48.178 "rw_ios_per_sec": 0, 00:32:48.178 "rw_mbytes_per_sec": 0, 00:32:48.178 
"r_mbytes_per_sec": 0, 00:32:48.178 "w_mbytes_per_sec": 0 00:32:48.178 }, 00:32:48.178 "claimed": false, 00:32:48.178 "zoned": false, 00:32:48.178 "supported_io_types": { 00:32:48.178 "read": true, 00:32:48.178 "write": true, 00:32:48.178 "unmap": true, 00:32:48.178 "flush": false, 00:32:48.178 "reset": true, 00:32:48.178 "nvme_admin": false, 00:32:48.178 "nvme_io": false, 00:32:48.178 "nvme_io_md": false, 00:32:48.178 "write_zeroes": true, 00:32:48.178 "zcopy": false, 00:32:48.178 "get_zone_info": false, 00:32:48.178 "zone_management": false, 00:32:48.178 "zone_append": false, 00:32:48.178 "compare": false, 00:32:48.178 "compare_and_write": false, 00:32:48.178 "abort": false, 00:32:48.178 "seek_hole": true, 00:32:48.178 "seek_data": true, 00:32:48.178 "copy": false, 00:32:48.178 "nvme_iov_md": false 00:32:48.178 }, 00:32:48.178 "driver_specific": { 00:32:48.178 "lvol": { 00:32:48.178 "lvol_store_uuid": "605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7", 00:32:48.178 "base_bdev": "aio_bdev", 00:32:48.178 "thin_provision": false, 00:32:48.178 "num_allocated_clusters": 38, 00:32:48.178 "snapshot": false, 00:32:48.178 "clone": false, 00:32:48.178 "esnap_clone": false 00:32:48.178 } 00:32:48.178 } 00:32:48.178 } 00:32:48.178 ] 00:32:48.178 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:48.178 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:48.178 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:48.439 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:48.439 14:14:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:48.439 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:48.439 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:48.439 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1211c861-18f3-4dd1-9821-3cd695d042cb 00:32:48.700 14:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 605adcc8-d3f7-4bc2-9b2c-c6dad3a1abe7 00:32:48.960 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:48.960 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:48.960 00:32:48.960 real 0m16.898s 00:32:48.960 user 0m34.729s 00:32:48.960 sys 0m3.026s 00:32:48.960 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:48.960 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:48.960 ************************************ 00:32:48.960 END TEST lvs_grow_dirty 00:32:48.960 ************************************ 
00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:49.222 nvmf_trace.0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.222 14:14:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.222 rmmod nvme_tcp 00:32:49.222 rmmod nvme_fabrics 00:32:49.222 rmmod nvme_keyring 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2647876 ']' 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2647876 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2647876 ']' 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2647876 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:49.222 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2647876 00:32:49.223 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:49.223 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:49.223 
14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2647876' 00:32:49.223 killing process with pid 2647876 00:32:49.223 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2647876 00:32:49.223 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2647876 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.484 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.485 14:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.397 
14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.397 00:32:51.397 real 0m44.466s 00:32:51.397 user 0m53.274s 00:32:51.397 sys 0m10.816s 00:32:51.397 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:51.397 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.397 ************************************ 00:32:51.397 END TEST nvmf_lvs_grow 00:32:51.397 ************************************ 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:51.659 ************************************ 00:32:51.659 START TEST nvmf_bdev_io_wait 00:32:51.659 ************************************ 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:51.659 * Looking for test storage... 
00:32:51.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.659 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.920 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.921 --rc genhtml_branch_coverage=1 00:32:51.921 --rc genhtml_function_coverage=1 00:32:51.921 --rc genhtml_legend=1 00:32:51.921 --rc geninfo_all_blocks=1 00:32:51.921 --rc geninfo_unexecuted_blocks=1 00:32:51.921 00:32:51.921 ' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.921 --rc genhtml_branch_coverage=1 00:32:51.921 --rc genhtml_function_coverage=1 00:32:51.921 --rc genhtml_legend=1 00:32:51.921 --rc geninfo_all_blocks=1 00:32:51.921 --rc geninfo_unexecuted_blocks=1 00:32:51.921 00:32:51.921 ' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.921 --rc genhtml_branch_coverage=1 00:32:51.921 --rc genhtml_function_coverage=1 00:32:51.921 --rc genhtml_legend=1 00:32:51.921 --rc geninfo_all_blocks=1 00:32:51.921 --rc geninfo_unexecuted_blocks=1 00:32:51.921 00:32:51.921 ' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.921 --rc genhtml_branch_coverage=1 00:32:51.921 --rc genhtml_function_coverage=1 
00:32:51.921 --rc genhtml_legend=1 00:32:51.921 --rc geninfo_all_blocks=1 00:32:51.921 --rc geninfo_unexecuted_blocks=1 00:32:51.921 00:32:51.921 ' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:51.921 14:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.921 14:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.921 14:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.921 14:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.921 14:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:00.064 14:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.064 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:00.065 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:00.065 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:00.065 Found net devices under 0000:31:00.0: cvl_0_0 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:00.065 Found net devices under 0000:31:00.1: cvl_0_1 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.065 14:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.065 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:00.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:33:00.066 00:33:00.066 --- 10.0.0.2 ping statistics --- 00:33:00.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.066 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:00.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:33:00.066 00:33:00.066 --- 10.0.0.1 ping statistics --- 00:33:00.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.066 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.066 14:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2652690 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2652690 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2652690 ']' 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:00.066 14:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.066 [2024-11-06 14:14:45.661146] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.066 [2024-11-06 14:14:45.662289] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:00.066 [2024-11-06 14:14:45.662340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.066 [2024-11-06 14:14:45.762354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.066 [2024-11-06 14:14:45.817933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.066 [2024-11-06 14:14:45.817985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.066 [2024-11-06 14:14:45.817994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.066 [2024-11-06 14:14:45.818001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.066 [2024-11-06 14:14:45.818008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:00.066 [2024-11-06 14:14:45.820067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.066 [2024-11-06 14:14:45.820199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.066 [2024-11-06 14:14:45.820356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.066 [2024-11-06 14:14:45.820357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.066 [2024-11-06 14:14:45.820860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.327 14:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.327 [2024-11-06 14:14:46.588884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.327 [2024-11-06 14:14:46.589477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:00.327 [2024-11-06 14:14:46.589605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:00.327 [2024-11-06 14:14:46.589786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.327 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.327 [2024-11-06 14:14:46.601357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.589 Malloc0 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.589 14:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.589 [2024-11-06 14:14:46.673581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2652992 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2652994 00:33:00.589 14:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.589 { 00:33:00.589 "params": { 00:33:00.589 "name": "Nvme$subsystem", 00:33:00.589 "trtype": "$TEST_TRANSPORT", 00:33:00.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.589 "adrfam": "ipv4", 00:33:00.589 "trsvcid": "$NVMF_PORT", 00:33:00.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.589 "hdgst": ${hdgst:-false}, 00:33:00.589 "ddgst": ${ddgst:-false} 00:33:00.589 }, 00:33:00.589 "method": "bdev_nvme_attach_controller" 00:33:00.589 } 00:33:00.589 EOF 00:33:00.589 )") 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2652996 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.589 14:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.589 { 00:33:00.589 "params": { 00:33:00.589 "name": "Nvme$subsystem", 00:33:00.589 "trtype": "$TEST_TRANSPORT", 00:33:00.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.589 "adrfam": "ipv4", 00:33:00.589 "trsvcid": "$NVMF_PORT", 00:33:00.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.589 "hdgst": ${hdgst:-false}, 00:33:00.589 "ddgst": ${ddgst:-false} 00:33:00.589 }, 00:33:00.589 "method": "bdev_nvme_attach_controller" 00:33:00.589 } 00:33:00.589 EOF 00:33:00.589 )") 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2652999 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.589 { 00:33:00.589 "params": { 00:33:00.589 "name": 
"Nvme$subsystem", 00:33:00.589 "trtype": "$TEST_TRANSPORT", 00:33:00.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.589 "adrfam": "ipv4", 00:33:00.589 "trsvcid": "$NVMF_PORT", 00:33:00.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.589 "hdgst": ${hdgst:-false}, 00:33:00.589 "ddgst": ${ddgst:-false} 00:33:00.589 }, 00:33:00.589 "method": "bdev_nvme_attach_controller" 00:33:00.589 } 00:33:00.589 EOF 00:33:00.589 )") 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.589 { 00:33:00.589 "params": { 00:33:00.589 "name": "Nvme$subsystem", 00:33:00.589 "trtype": "$TEST_TRANSPORT", 00:33:00.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.589 "adrfam": "ipv4", 00:33:00.589 "trsvcid": "$NVMF_PORT", 00:33:00.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.589 "hdgst": ${hdgst:-false}, 00:33:00.589 "ddgst": ${ddgst:-false} 00:33:00.589 }, 00:33:00.589 "method": 
"bdev_nvme_attach_controller" 00:33:00.589 } 00:33:00.589 EOF 00:33:00.589 )") 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2652992 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.589 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.590 "params": { 00:33:00.590 "name": "Nvme1", 00:33:00.590 "trtype": "tcp", 00:33:00.590 "traddr": "10.0.0.2", 00:33:00.590 "adrfam": "ipv4", 00:33:00.590 "trsvcid": "4420", 00:33:00.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.590 "hdgst": false, 00:33:00.590 "ddgst": false 00:33:00.590 }, 00:33:00.590 "method": "bdev_nvme_attach_controller" 00:33:00.590 }' 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.590 "params": { 00:33:00.590 "name": "Nvme1", 00:33:00.590 "trtype": "tcp", 00:33:00.590 "traddr": "10.0.0.2", 00:33:00.590 "adrfam": "ipv4", 00:33:00.590 "trsvcid": "4420", 00:33:00.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.590 "hdgst": false, 00:33:00.590 "ddgst": false 00:33:00.590 }, 00:33:00.590 "method": "bdev_nvme_attach_controller" 00:33:00.590 }' 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.590 "params": { 00:33:00.590 "name": "Nvme1", 00:33:00.590 "trtype": "tcp", 00:33:00.590 "traddr": "10.0.0.2", 00:33:00.590 "adrfam": "ipv4", 00:33:00.590 "trsvcid": "4420", 00:33:00.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.590 "hdgst": false, 00:33:00.590 "ddgst": false 00:33:00.590 }, 00:33:00.590 "method": "bdev_nvme_attach_controller" 00:33:00.590 }' 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.590 14:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.590 "params": { 00:33:00.590 "name": "Nvme1", 00:33:00.590 "trtype": "tcp", 00:33:00.590 "traddr": "10.0.0.2", 00:33:00.590 "adrfam": "ipv4", 00:33:00.590 "trsvcid": "4420", 00:33:00.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.590 "hdgst": false, 00:33:00.590 "ddgst": false 00:33:00.590 }, 00:33:00.590 "method": "bdev_nvme_attach_controller" 
00:33:00.590 }' 00:33:00.590 [2024-11-06 14:14:46.730632] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:00.590 [2024-11-06 14:14:46.730697] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:00.590 [2024-11-06 14:14:46.731653] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:00.590 [2024-11-06 14:14:46.731718] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:00.590 [2024-11-06 14:14:46.735474] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:00.590 [2024-11-06 14:14:46.735544] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:00.590 [2024-11-06 14:14:46.742174] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:33:00.590 [2024-11-06 14:14:46.742263] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:00.852 [2024-11-06 14:14:46.934616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.852 [2024-11-06 14:14:46.974855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:00.852 [2024-11-06 14:14:47.025573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.852 [2024-11-06 14:14:47.067003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:00.852 [2024-11-06 14:14:47.119167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.113 [2024-11-06 14:14:47.157455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:01.113 [2024-11-06 14:14:47.182440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.113 [2024-11-06 14:14:47.220211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:01.113 Running I/O for 1 seconds... 00:33:01.374 Running I/O for 1 seconds... 00:33:01.374 Running I/O for 1 seconds... 00:33:01.374 Running I/O for 1 seconds... 
00:33:02.318 8202.00 IOPS, 32.04 MiB/s 00:33:02.318 Latency(us) 00:33:02.318 [2024-11-06T13:14:48.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.318 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:02.318 Nvme1n1 : 1.02 8211.56 32.08 0.00 0.00 15463.18 4942.51 23046.83 00:33:02.318 [2024-11-06T13:14:48.598Z] =================================================================================================================== 00:33:02.318 [2024-11-06T13:14:48.598Z] Total : 8211.56 32.08 0.00 0.00 15463.18 4942.51 23046.83 00:33:02.318 11807.00 IOPS, 46.12 MiB/s [2024-11-06T13:14:48.598Z] 7410.00 IOPS, 28.95 MiB/s 00:33:02.318 Latency(us) 00:33:02.318 [2024-11-06T13:14:48.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.318 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:02.318 Nvme1n1 : 1.01 11867.44 46.36 0.00 0.00 10747.22 2457.60 17585.49 00:33:02.318 [2024-11-06T13:14:48.598Z] =================================================================================================================== 00:33:02.318 [2024-11-06T13:14:48.598Z] Total : 11867.44 46.36 0.00 0.00 10747.22 2457.60 17585.49 00:33:02.318 00:33:02.318 Latency(us) 00:33:02.318 [2024-11-06T13:14:48.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.318 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:02.318 Nvme1n1 : 1.01 7498.80 29.29 0.00 0.00 17022.19 4150.61 32986.45 00:33:02.318 [2024-11-06T13:14:48.598Z] =================================================================================================================== 00:33:02.318 [2024-11-06T13:14:48.598Z] Total : 7498.80 29.29 0.00 0.00 17022.19 4150.61 32986.45 00:33:02.318 188800.00 IOPS, 737.50 MiB/s 00:33:02.318 Latency(us) 00:33:02.318 [2024-11-06T13:14:48.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:33:02.318 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:02.318 Nvme1n1 : 1.00 188425.12 736.04 0.00 0.00 675.44 303.79 1979.73 00:33:02.318 [2024-11-06T13:14:48.598Z] =================================================================================================================== 00:33:02.318 [2024-11-06T13:14:48.598Z] Total : 188425.12 736.04 0.00 0.00 675.44 303.79 1979.73 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2652994 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2652996 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2652999 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.318 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.579 rmmod nvme_tcp 00:33:02.579 rmmod nvme_fabrics 00:33:02.579 rmmod nvme_keyring 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2652690 ']' 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2652690 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2652690 ']' 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2652690 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2652690 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:02.579 14:14:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2652690' 00:33:02.579 killing process with pid 2652690 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2652690 00:33:02.579 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2652690 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.839 14:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.751 14:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.751 00:33:04.751 real 0m13.245s 00:33:04.751 user 0m16.362s 00:33:04.751 sys 0m7.616s 00:33:04.751 14:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:04.751 14:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:04.751 ************************************ 00:33:04.751 END TEST nvmf_bdev_io_wait 00:33:04.751 ************************************ 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.012 ************************************ 00:33:05.012 START TEST nvmf_queue_depth 00:33:05.012 ************************************ 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:05.012 * Looking for test storage... 
00:33:05.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.012 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.013 --rc genhtml_branch_coverage=1 00:33:05.013 --rc genhtml_function_coverage=1 00:33:05.013 --rc genhtml_legend=1 00:33:05.013 --rc geninfo_all_blocks=1 00:33:05.013 --rc geninfo_unexecuted_blocks=1 00:33:05.013 00:33:05.013 ' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.013 --rc genhtml_branch_coverage=1 00:33:05.013 --rc genhtml_function_coverage=1 00:33:05.013 --rc genhtml_legend=1 00:33:05.013 --rc geninfo_all_blocks=1 00:33:05.013 --rc geninfo_unexecuted_blocks=1 00:33:05.013 00:33:05.013 ' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.013 --rc genhtml_branch_coverage=1 00:33:05.013 --rc genhtml_function_coverage=1 00:33:05.013 --rc genhtml_legend=1 00:33:05.013 --rc geninfo_all_blocks=1 00:33:05.013 --rc geninfo_unexecuted_blocks=1 00:33:05.013 00:33:05.013 ' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:05.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.013 --rc genhtml_branch_coverage=1 00:33:05.013 --rc genhtml_function_coverage=1 00:33:05.013 --rc genhtml_legend=1 00:33:05.013 --rc 
geninfo_all_blocks=1 00:33:05.013 --rc geninfo_unexecuted_blocks=1 00:33:05.013 00:33:05.013 ' 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.013 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.274 14:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.274 14:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.274 14:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.274 14:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.406 
14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:13.406 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.406 14:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:13.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:13.406 Found net devices under 0000:31:00.0: cvl_0_0 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:13.406 Found net devices under 0000:31:00.1: cvl_0_1 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.406 14:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.406 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:13.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:33:13.407 00:33:13.407 --- 10.0.0.2 ping statistics --- 00:33:13.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.407 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:33:13.407 00:33:13.407 --- 10.0.0.1 ping statistics --- 00:33:13.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.407 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.407 14:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2657574 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2657574 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2657574 ']' 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:13.407 14:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.407 [2024-11-06 14:14:58.819390] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.407 [2024-11-06 14:14:58.820393] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:13.407 [2024-11-06 14:14:58.820432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.407 [2024-11-06 14:14:58.915913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.407 [2024-11-06 14:14:58.951195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.407 [2024-11-06 14:14:58.951225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.407 [2024-11-06 14:14:58.951233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.407 [2024-11-06 14:14:58.951240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.407 [2024-11-06 14:14:58.951246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.407 [2024-11-06 14:14:58.951806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.407 [2024-11-06 14:14:59.007494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:13.407 [2024-11-06 14:14:59.007753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.407 [2024-11-06 14:14:59.660564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.407 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.668 Malloc0 00:33:13.668 14:14:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.668 [2024-11-06 14:14:59.740787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.668 
14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2657743 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2657743 /var/tmp/bdevperf.sock 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2657743 ']' 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:13.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:13.668 14:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.668 [2024-11-06 14:14:59.800433] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:33:13.668 [2024-11-06 14:14:59.800509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657743 ] 00:33:13.668 [2024-11-06 14:14:59.893725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.929 [2024-11-06 14:14:59.946536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.500 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:14.500 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:14.500 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:14.500 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.500 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.760 NVMe0n1 00:33:14.760 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.760 14:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:14.760 Running I/O for 10 seconds... 
00:33:17.085 8369.00 IOPS, 32.69 MiB/s [2024-11-06T13:15:03.935Z] 8712.00 IOPS, 34.03 MiB/s [2024-11-06T13:15:05.317Z] 9944.00 IOPS, 38.84 MiB/s [2024-11-06T13:15:06.258Z] 10878.25 IOPS, 42.49 MiB/s [2024-11-06T13:15:07.198Z] 11456.80 IOPS, 44.75 MiB/s [2024-11-06T13:15:08.138Z] 11781.33 IOPS, 46.02 MiB/s [2024-11-06T13:15:09.077Z] 12063.14 IOPS, 47.12 MiB/s [2024-11-06T13:15:10.018Z] 12275.62 IOPS, 47.95 MiB/s [2024-11-06T13:15:10.958Z] 12427.56 IOPS, 48.55 MiB/s [2024-11-06T13:15:11.218Z] 12593.50 IOPS, 49.19 MiB/s 00:33:24.938 Latency(us) 00:33:24.938 [2024-11-06T13:15:11.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.938 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:24.938 Verification LBA range: start 0x0 length 0x4000 00:33:24.938 NVMe0n1 : 10.06 12613.62 49.27 0.00 0.00 80892.46 24794.45 73837.23 00:33:24.938 [2024-11-06T13:15:11.218Z] =================================================================================================================== 00:33:24.938 [2024-11-06T13:15:11.218Z] Total : 12613.62 49.27 0.00 0.00 80892.46 24794.45 73837.23 00:33:24.938 { 00:33:24.938 "results": [ 00:33:24.938 { 00:33:24.938 "job": "NVMe0n1", 00:33:24.938 "core_mask": "0x1", 00:33:24.938 "workload": "verify", 00:33:24.938 "status": "finished", 00:33:24.938 "verify_range": { 00:33:24.938 "start": 0, 00:33:24.938 "length": 16384 00:33:24.938 }, 00:33:24.938 "queue_depth": 1024, 00:33:24.938 "io_size": 4096, 00:33:24.938 "runtime": 10.058969, 00:33:24.938 "iops": 12613.618751583786, 00:33:24.938 "mibps": 49.271948248374166, 00:33:24.938 "io_failed": 0, 00:33:24.938 "io_timeout": 0, 00:33:24.938 "avg_latency_us": 80892.46270197562, 00:33:24.938 "min_latency_us": 24794.453333333335, 00:33:24.938 "max_latency_us": 73837.22666666667 00:33:24.938 } 00:33:24.938 ], 00:33:24.938 "core_count": 1 00:33:24.938 } 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2657743 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2657743 ']' 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2657743 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2657743 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2657743' 00:33:24.938 killing process with pid 2657743 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2657743 00:33:24.938 Received shutdown signal, test time was about 10.000000 seconds 00:33:24.938 00:33:24.938 Latency(us) 00:33:24.938 [2024-11-06T13:15:11.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.938 [2024-11-06T13:15:11.218Z] =================================================================================================================== 00:33:24.938 [2024-11-06T13:15:11.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2657743 00:33:24.938 14:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.938 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.938 rmmod nvme_tcp 00:33:25.198 rmmod nvme_fabrics 00:33:25.198 rmmod nvme_keyring 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2657574 ']' 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2657574 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2657574 ']' 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2657574 00:33:25.198 14:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:25.198 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2657574 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2657574' 00:33:25.199 killing process with pid 2657574 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2657574 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2657574 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.199 14:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.751 00:33:27.751 real 0m22.461s 00:33:27.751 user 0m24.660s 00:33:27.751 sys 0m7.429s 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.751 ************************************ 00:33:27.751 END TEST nvmf_queue_depth 00:33:27.751 ************************************ 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:27.751 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:27.751 ************************************ 00:33:27.752 START 
TEST nvmf_target_multipath 00:33:27.752 ************************************ 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:27.752 * Looking for test storage... 00:33:27.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.752 14:15:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:27.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.752 --rc genhtml_branch_coverage=1 00:33:27.752 --rc genhtml_function_coverage=1 00:33:27.752 --rc genhtml_legend=1 00:33:27.752 --rc geninfo_all_blocks=1 00:33:27.752 --rc geninfo_unexecuted_blocks=1 00:33:27.752 00:33:27.752 ' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:27.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.752 --rc genhtml_branch_coverage=1 00:33:27.752 --rc genhtml_function_coverage=1 00:33:27.752 --rc genhtml_legend=1 00:33:27.752 --rc geninfo_all_blocks=1 00:33:27.752 --rc geninfo_unexecuted_blocks=1 00:33:27.752 00:33:27.752 ' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:27.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.752 --rc genhtml_branch_coverage=1 00:33:27.752 --rc genhtml_function_coverage=1 00:33:27.752 --rc genhtml_legend=1 00:33:27.752 --rc geninfo_all_blocks=1 00:33:27.752 --rc geninfo_unexecuted_blocks=1 00:33:27.752 00:33:27.752 ' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:27.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.752 --rc genhtml_branch_coverage=1 00:33:27.752 --rc genhtml_function_coverage=1 00:33:27.752 --rc genhtml_legend=1 00:33:27.752 --rc geninfo_all_blocks=1 00:33:27.752 --rc geninfo_unexecuted_blocks=1 00:33:27.752 00:33:27.752 ' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.752 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.753 14:15:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.753 14:15:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.753 14:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:35.897 14:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.897 14:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:35.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:35.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:35.897 Found net devices under 0000:31:00.0: cvl_0_0 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.897 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.898 14:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:35.898 Found net devices under 0000:31:00.1: cvl_0_1 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.898 14:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.898 14:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:35.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:33:35.898 00:33:35.898 --- 10.0.0.2 ping statistics --- 00:33:35.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.898 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:35.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:33:35.898 00:33:35.898 --- 10.0.0.1 ping statistics --- 00:33:35.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.898 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:35.898 only one NIC for nvmf test 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:35.898 14:15:21 
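Aside: the `nvmf/common.sh@293` line traced above (`NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`) is how the harness arranges for the target app to launch inside the freshly created namespace: it prepends the `ip netns exec <ns>` argv onto the app's argv as a bash array. A minimal runnable sketch of that array-prepend pattern follows; the app path and flags are illustrative stand-ins, and no real namespace is created or used.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-wrapping pattern from nvmf/common.sh@266/@293:
# prepend "ip netns exec <ns>" to an app's argv so a later
# "${NVMF_APP[@]}" invocation runs the app inside the netns.
# Values are illustrative; nothing here touches real namespaces.

NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode)

# Array concatenation preserves each argv element intact
# (no word-splitting or re-quoting, unlike string concatenation).
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

printf '%s\n' "${NVMF_APP[@]}"
```

Because the elements stay separate array entries, paths or flags containing spaces would survive the prepend unchanged, which a string-based `CMD="ip netns exec $ns $CMD"` approach would not guarantee.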
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.898 rmmod nvme_tcp 00:33:35.898 rmmod nvme_fabrics 00:33:35.898 rmmod nvme_keyring 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:35.898 14:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.898 14:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.303 
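Aside: the `iptr` teardown traced above (`iptables-save` piped through `grep -v SPDK_NVMF` into `iptables-restore`) works because every rule the harness adds carries an `-m comment --comment 'SPDK_NVMF:...'` tag (visible in the earlier `ipts` call), so the round-trip drops exactly the SPDK-added rules and leaves everything else alone. The sketch below applies the same filter to sample `iptables-save` text rather than the live firewall, since modifying real rules needs root.

```shell
#!/usr/bin/env bash
# Sketch of the iptr cleanup pattern from nvmf/common.sh@791:
# filter a saved ruleset, keeping only rules NOT tagged SPDK_NVMF.
# Sample rule text stands in for real `iptables-save` output.

saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
kept="$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)"

printf '%s\n' "$kept"
```

Tagging rules at insert time and filtering by tag at teardown is what lets the test clean up idempotently even if it crashed mid-run and left rules behind.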
14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.303 00:33:37.303 real 0m9.947s 00:33:37.303 user 0m2.132s 00:33:37.303 sys 0m5.742s 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:37.303 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:37.303 ************************************ 00:33:37.303 END TEST nvmf_target_multipath 00:33:37.303 ************************************ 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:37.604 ************************************ 00:33:37.604 START TEST nvmf_zcopy 00:33:37.604 ************************************ 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:37.604 * Looking for test storage... 
00:33:37.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:37.604 14:15:23 
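Aside: the `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) walks both version strings component by component: split on separators into `ver1`/`ver2` arrays, pad the shorter one, and compare numerically per index, so `1.15 < 2` even though `"1.15" > "2"` lexically. A simplified, self-contained sketch of that comparison (assuming purely numeric components, unlike the full helper which also strips suffixes via `decimal`):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions "<" walk from scripts/common.sh:
# split on "." and "-", then compare component-by-component as integers,
# treating missing components as 0.

version_lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2      && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
version_lt 2 1.15      || echo "2 >= 1.15"
```

The component-wise numeric comparison is the reason the harness can gate features like branch-coverage flags on `lcov --version` output without tripping over multi-digit minor versions.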
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:37.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.604 --rc genhtml_branch_coverage=1 00:33:37.604 --rc genhtml_function_coverage=1 00:33:37.604 --rc genhtml_legend=1 00:33:37.604 --rc geninfo_all_blocks=1 00:33:37.604 --rc geninfo_unexecuted_blocks=1 00:33:37.604 00:33:37.604 ' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:37.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.604 --rc genhtml_branch_coverage=1 00:33:37.604 --rc genhtml_function_coverage=1 00:33:37.604 --rc genhtml_legend=1 00:33:37.604 --rc geninfo_all_blocks=1 00:33:37.604 --rc geninfo_unexecuted_blocks=1 00:33:37.604 00:33:37.604 ' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:37.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.604 --rc genhtml_branch_coverage=1 00:33:37.604 --rc genhtml_function_coverage=1 00:33:37.604 --rc genhtml_legend=1 00:33:37.604 --rc geninfo_all_blocks=1 00:33:37.604 --rc geninfo_unexecuted_blocks=1 00:33:37.604 00:33:37.604 ' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:37.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.604 --rc genhtml_branch_coverage=1 00:33:37.604 --rc genhtml_function_coverage=1 00:33:37.604 --rc genhtml_legend=1 00:33:37.604 --rc geninfo_all_blocks=1 00:33:37.604 --rc geninfo_unexecuted_blocks=1 00:33:37.604 00:33:37.604 ' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.604 14:15:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.604 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.876 14:15:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.876 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.877 14:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.019 
14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.019 14:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:46.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:46.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.019 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:46.020 Found net devices under 0000:31:00.0: cvl_0_0 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:46.020 Found net devices under 0000:31:00.1: cvl_0_1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
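The records above show common.sh discovering the kernel net device behind each E810 port by globbing `/sys/bus/pci/devices/$pci/net/*` (yielding cvl_0_0 and cvl_0_1). A minimal Python sketch of that sysfs lookup, hedged: this is an illustration of the glob logic, not SPDK's actual code, and it uses a temporary directory standing in for `/sys` so it is self-contained and needs no hardware.

```python
import os
import tempfile

def pci_net_devs(sysfs_root, pci_addr):
    """Return net-device names exposed under a PCI device, mimicking the
    common.sh glob /sys/bus/pci/devices/$pci/net/* seen in the log."""
    net_dir = os.path.join(sysfs_root, "bus/pci/devices", pci_addr, "net")
    try:
        return sorted(os.listdir(net_dir))
    except FileNotFoundError:
        return []  # no network interface bound to this device

# Demo: fabricate a tiny sysfs tree instead of touching the real /sys.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bus/pci/devices/0000:31:00.0/net/cvl_0_0"))
os.makedirs(os.path.join(root, "bus/pci/devices/0000:31:00.1/net/cvl_0_1"))

print(pci_net_devs(root, "0000:31:00.0"))  # → ['cvl_0_0']
```

The empty-list fallback mirrors the `(( 1 == 0 ))` guard in the log, which skips PCI functions that expose no netdev.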
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.020 14:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:46.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:46.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms
00:33:46.020
00:33:46.020 --- 10.0.0.2 ping statistics ---
00:33:46.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.020 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:46.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:46.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms
00:33:46.020
00:33:46.020 --- 10.0.0.1 ping statistics ---
00:33:46.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.020 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2668785
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2668785
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2668785 ']'
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:46.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:46.020 14:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:46.020 [2024-11-06 14:15:31.530410] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:46.020 [2024-11-06 14:15:31.531601] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
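Above, `waitforlisten 2668785` blocks until the freshly launched nvmf_tgt is up and accepting connections on the UNIX domain socket /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A minimal Python sketch of that wait-for-socket pattern, hedged: the real helper is a shell function that also checks the PID and issues an RPC; this only shows the poll-until-connectable core, and the demo stands up its own listener so it runs anywhere.

```python
import os
import socket
import tempfile
import time

def waitforlisten(sock_path, max_retries=100, delay=0.05):
    """Poll until something accepts connections on a UNIX socket,
    in the spirit of autotest_common.sh's waitforlisten."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # target process is listening
        except OSError:
            time.sleep(delay)  # not up yet; retry
        finally:
            s.close()
    return False

# Demo: create an in-process listener so the wait succeeds immediately.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
print(waitforlisten(path))  # → True
```

If the socket never appears within `max_retries * delay` seconds the function returns False, matching the log's behavior of eventually giving up on a target that fails to start.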
00:33:46.021 [2024-11-06 14:15:31.531656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.021 [2024-11-06 14:15:31.631989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.021 [2024-11-06 14:15:31.682296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.021 [2024-11-06 14:15:31.682343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.021 [2024-11-06 14:15:31.682352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.021 [2024-11-06 14:15:31.682359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.021 [2024-11-06 14:15:31.682366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.021 [2024-11-06 14:15:31.683143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.021 [2024-11-06 14:15:31.762031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:46.021 [2024-11-06 14:15:31.762316] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 [2024-11-06 14:15:32.396000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 
14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 [2024-11-06 14:15:32.424314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 malloc0 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.282 { 00:33:46.282 "params": { 00:33:46.282 "name": "Nvme$subsystem", 00:33:46.282 "trtype": "$TEST_TRANSPORT", 00:33:46.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.282 "adrfam": "ipv4", 00:33:46.282 "trsvcid": "$NVMF_PORT", 00:33:46.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.282 "hdgst": ${hdgst:-false}, 00:33:46.282 "ddgst": ${ddgst:-false} 00:33:46.282 }, 00:33:46.282 "method": "bdev_nvme_attach_controller" 00:33:46.282 } 00:33:46.282 EOF 00:33:46.282 )") 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:46.282 14:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:46.282 14:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:46.282 "params": { 00:33:46.282 "name": "Nvme1", 00:33:46.282 "trtype": "tcp", 00:33:46.282 "traddr": "10.0.0.2", 00:33:46.282 "adrfam": "ipv4", 00:33:46.282 "trsvcid": "4420", 00:33:46.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.282 "hdgst": false, 00:33:46.282 "ddgst": false 00:33:46.282 }, 00:33:46.282 "method": "bdev_nvme_attach_controller" 00:33:46.282 }' 00:33:46.282 [2024-11-06 14:15:32.526443] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:33:46.282 [2024-11-06 14:15:32.526504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669079 ] 00:33:46.542 [2024-11-06 14:15:32.619740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.542 [2024-11-06 14:15:32.673874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.803 Running I/O for 10 seconds... 
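The `gen_nvmf_target_json` records above build, via a shell heredoc piped through `jq`, the per-controller JSON object that bdevperf consumes over `--json /dev/fd/62`. A small Python sketch of the same construction, hedged: the field names and values mirror exactly what the log's `printf` emits; the function name and defaults are illustrative, and the full config file bdevperf reads wraps this object in additional structure not shown here.

```python
import json

def gen_nvmf_target_json(subsystem=1, target_ip="10.0.0.2", port="4420",
                         trtype="tcp", hdgst=False, ddgst=False):
    """Build the bdev_nvme_attach_controller config object printed by
    the log's gen_nvmf_target_json helper (illustrative sketch)."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": target_ip,
            "adrfam": "ipv4",
            "trsvcid": port,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,  # header digest off, as in the log
            "ddgst": ddgst,  # data digest off, as in the log
        },
        "method": "bdev_nvme_attach_controller",
    }

config = gen_nvmf_target_json()
print(json.dumps(config, indent=2))
```

Feeding the JSON over an anonymous file descriptor (`/dev/fd/62`) lets the test pass per-run connection parameters to bdevperf without writing a temp file to disk.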
00:33:49.131 6385.00 IOPS, 49.88 MiB/s
[2024-11-06T13:15:35.982Z] 6434.00 IOPS, 50.27 MiB/s
[2024-11-06T13:15:37.366Z] 6452.67 IOPS, 50.41 MiB/s
[2024-11-06T13:15:38.307Z] 6673.25 IOPS, 52.13 MiB/s
[2024-11-06T13:15:39.248Z] 7266.00 IOPS, 56.77 MiB/s
[2024-11-06T13:15:40.187Z] 7658.33 IOPS, 59.83 MiB/s
[2024-11-06T13:15:41.128Z] 7944.00 IOPS, 62.06 MiB/s
[2024-11-06T13:15:42.069Z] 8156.62 IOPS, 63.72 MiB/s
[2024-11-06T13:15:43.011Z] 8323.22 IOPS, 65.03 MiB/s
[2024-11-06T13:15:43.011Z] 8451.40 IOPS, 66.03 MiB/s
00:33:56.731 Latency(us)
[2024-11-06T13:15:43.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.731 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:56.731 Verification LBA range: start 0x0 length 0x1000
00:33:56.731 Nvme1n1 : 10.01 8454.48 66.05 0.00 0.00 15094.76 1774.93 27525.12
[2024-11-06T13:15:43.011Z] ===================================================================================================================
[2024-11-06T13:15:43.011Z] Total : 8454.48 66.05 0.00 0.00 15094.76 1774.93 27525.12
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2671078
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:56.991 14:15:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.991 { 00:33:56.991 "params": { 00:33:56.991 "name": "Nvme$subsystem", 00:33:56.991 "trtype": "$TEST_TRANSPORT", 00:33:56.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.991 "adrfam": "ipv4", 00:33:56.991 "trsvcid": "$NVMF_PORT", 00:33:56.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.991 "hdgst": ${hdgst:-false}, 00:33:56.991 "ddgst": ${ddgst:-false} 00:33:56.991 }, 00:33:56.991 "method": "bdev_nvme_attach_controller" 00:33:56.991 } 00:33:56.991 EOF 00:33:56.991 )") 00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:56.991 [2024-11-06 14:15:43.107540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.991 [2024-11-06 14:15:43.107568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:56.991 14:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.991 "params": { 00:33:56.991 "name": "Nvme1", 00:33:56.991 "trtype": "tcp", 00:33:56.991 "traddr": "10.0.0.2", 00:33:56.991 "adrfam": "ipv4", 00:33:56.991 "trsvcid": "4420", 00:33:56.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.991 "hdgst": false, 00:33:56.991 "ddgst": false 00:33:56.991 }, 00:33:56.991 "method": "bdev_nvme_attach_controller" 00:33:56.991 }' 00:33:56.991 [2024-11-06 14:15:43.119513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.991 [2024-11-06 14:15:43.119521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.991 [2024-11-06 14:15:43.131512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.991 [2024-11-06 14:15:43.131519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.991 [2024-11-06 14:15:43.143511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.991 [2024-11-06 14:15:43.143518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.991 [2024-11-06 14:15:43.150982] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:33:56.991 [2024-11-06 14:15:43.151029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671078 ] 00:33:56.992 [2024-11-06 14:15:43.155511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.155517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.167511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.167520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.179511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.179518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.191511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.191517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.203511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.203518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.215511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.215518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.227512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.227519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:33:56.992 [2024-11-06 14:15:43.233525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.992 [2024-11-06 14:15:43.239511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.239519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.251511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.251519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.992 [2024-11-06 14:15:43.262990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.992 [2024-11-06 14:15:43.263512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.992 [2024-11-06 14:15:43.263520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 [2024-11-06 14:15:43.275514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.252 [2024-11-06 14:15:43.275522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 [2024-11-06 14:15:43.287516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.252 [2024-11-06 14:15:43.287528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 [2024-11-06 14:15:43.299513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.252 [2024-11-06 14:15:43.299524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 [2024-11-06 14:15:43.311512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.252 [2024-11-06 14:15:43.311522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 [2024-11-06 14:15:43.323511] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.252 [2024-11-06 14:15:43.323518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.252 
[... identical error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1517:nvmf_rpc_ns_paused) repeats at ~12-15 ms intervals from 14:15:43.335518 through 14:15:45.561089; unique lines retained below ...]
00:33:57.252 Running I/O for 5 seconds... 00:33:57.514 
00:33:58.298 17930.00 IOPS, 140.08 MiB/s [2024-11-06T13:15:44.578Z] 00:33:58.298 
00:33:59.344 18040.00 IOPS, 140.94 MiB/s [2024-11-06T13:15:45.624Z] 00:33:59.344 
[2024-11-06 14:15:45.575064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.344 
[2024-11-06 14:15:45.575078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.344 [2024-11-06 14:15:45.589307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.344 [2024-11-06 14:15:45.589322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.344 [2024-11-06 14:15:45.603859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.344 [2024-11-06 14:15:45.603881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.344 [2024-11-06 14:15:45.618948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.344 [2024-11-06 14:15:45.618963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.632690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.632711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.647545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.647560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.660135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.660149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.674487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.674501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.688096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.688111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.702864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.702880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.716526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.716541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.730996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.731012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.744908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.744923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.759387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.759402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.772198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.772212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.786832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.786846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.800732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.800752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:59.605 [2024-11-06 14:15:45.814851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.814866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.828446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.828461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.843138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.843153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.856759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.856774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.605 [2024-11-06 14:15:45.871353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.605 [2024-11-06 14:15:45.871368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.884423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.884438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.898734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.898764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.912180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.912195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.926769] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.926785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.940811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.940827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.955654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.955669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.968290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.968305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.982894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.982909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:45.996881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:45.996896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.011868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.011882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.026844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.026859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.040300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.040314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.054899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.054915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.068657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.068672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.083167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.083182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.096984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.096999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.111015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.111033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.125006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.125022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.866 [2024-11-06 14:15:46.138986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.866 [2024-11-06 14:15:46.139001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.152743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 
[2024-11-06 14:15:46.152764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.167504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.167521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.180502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.180518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.195019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.195035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.208679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.208694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.223017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.223032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.237023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.237038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.250594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.250610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.264304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.264319] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.279147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.279162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.292802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.292817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.307153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.307169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.321179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.321194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.335564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.335579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.348138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.348153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.361343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.361358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.375280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.375295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:00.128 [2024-11-06 14:15:46.388696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.388712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.128 [2024-11-06 14:15:46.403460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.128 [2024-11-06 14:15:46.403475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.389 [2024-11-06 14:15:46.415989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.389 [2024-11-06 14:15:46.416005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.389 [2024-11-06 14:15:46.431179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.431195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.445103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.445119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.459523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.459539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.472703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.472719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.487351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.487367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.500602] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.500617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.514997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.515012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.528644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.528660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 18062.33 IOPS, 141.11 MiB/s [2024-11-06T13:15:46.670Z] [2024-11-06 14:15:46.543358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.543373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.556650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.556665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.571380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.571395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.585246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.585260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.599359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.599375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.613336] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.613351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.627073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.627089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.641234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.641249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.390 [2024-11-06 14:15:46.654852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.390 [2024-11-06 14:15:46.654868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.668561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.668576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.682968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.682989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.696786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.696801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.711208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.711223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.724970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.724985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.739374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.739390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.752933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.752947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.766665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.766680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.780147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.780162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.795033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.795048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.809187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.809201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.823787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.823801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.835952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 
[2024-11-06 14:15:46.835966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.851137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.851152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.864761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.864776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.879403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.879417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.893038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.893053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.906970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.906985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.651 [2024-11-06 14:15:46.920845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.651 [2024-11-06 14:15:46.920860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:46.935693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:46.935708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:46.948063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:46.948085] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:46.963687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:46.963701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:46.976414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:46.976429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:46.990624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:46.990639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.004403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.004417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.019294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.019309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.033310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.033324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.046895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.046910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.060605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.060619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:00.913 [2024-11-06 14:15:47.075426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.075441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.089301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.089315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.103108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.103123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.116988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.117003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.131339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.131354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.145137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.145152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.159231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.159246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.172845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.172860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.913 [2024-11-06 14:15:47.186929] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.913 [2024-11-06 14:15:47.186943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.200865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.200880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.215240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.215261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.228534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.228548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.242931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.242946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.256754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.256768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.271164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.271180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.284807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.284822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.299100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.299115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.312511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.312525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.326910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.326924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.340638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.340652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.355051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.355066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.368790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.368805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.382805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.382820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.396281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.396295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.410370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 
[2024-11-06 14:15:47.410384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.424346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.424361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.439555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.439570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.175 [2024-11-06 14:15:47.452446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.175 [2024-11-06 14:15:47.452461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.466770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.466786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.480831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.480853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.495480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.495495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.508344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.508358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.522844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.522859] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 18056.25 IOPS, 141.06 MiB/s [2024-11-06T13:15:47.716Z] [2024-11-06 14:15:47.536456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.536470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.550938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.550952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.564722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.564737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.579493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.579508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.593196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.593211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.606726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.606741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.619917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.619932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.635277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.635293] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.649153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.649168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.663258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.663272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.677300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.677315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.691207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.691223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.436 [2024-11-06 14:15:47.704929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.436 [2024-11-06 14:15:47.704945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.719188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.719204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.733213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.733228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.747787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.747802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:01.697 [2024-11-06 14:15:47.763207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.763223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.777053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.777068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.791625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.791640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.804782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.804797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.819640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.819655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.833083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.833098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.846633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.846648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.860157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.860172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.875455] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.875472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.889232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.889248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.903163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.903178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.916783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.916798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.931229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.931244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.944142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.944156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.959243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.959258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.697 [2024-11-06 14:15:47.973222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.697 [2024-11-06 14:15:47.973236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.957 [2024-11-06 14:15:47.987066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:47.987080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.000783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.000797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.015274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.015290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.028184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.028199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.043274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.043289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.056575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.056590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.071002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.071017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.084933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.084948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.099242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 
[2024-11-06 14:15:48.099258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.113135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.113151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.126498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.126514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.139777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.139792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.152716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.152731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.167110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.167125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.180883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.180898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.194694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.194709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.208469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.208484] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.958 [2024-11-06 14:15:48.223085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.958 [2024-11-06 14:15:48.223100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.236920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.236935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.250936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.250952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.264810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.264825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.279611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.279627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.292230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.292244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.305232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.305246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.319501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.319516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:02.219 [2024-11-06 14:15:48.333321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.333337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.347240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.347256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.361164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.361179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.375172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.375187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.388788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.388803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.403422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.403438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.417434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.417449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.431352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.431367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.443992] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.444006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.457331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.457345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.471456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.471473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.219 [2024-11-06 14:15:48.484548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.219 [2024-11-06 14:15:48.484564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.499263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.499279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.513147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.513162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.527438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.527464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 18056.40 IOPS, 141.07 MiB/s [2024-11-06T13:15:48.760Z] [2024-11-06 14:15:48.539398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.539413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 00:34:02.480 Latency(us) 
00:34:02.480 [2024-11-06T13:15:48.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.480 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:02.480 Nvme1n1 : 5.01 18057.85 141.08 0.00 0.00 7081.83 3140.27 12670.29 00:34:02.480 [2024-11-06T13:15:48.760Z] =================================================================================================================== 00:34:02.480 [2024-11-06T13:15:48.760Z] Total : 18057.85 141.08 0.00 0.00 7081.83 3140.27 12670.29 00:34:02.480 [2024-11-06 14:15:48.547516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.547529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.559521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.559534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.571517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.571529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.583517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.583528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.595512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.595524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.607511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.607520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:34:02.480 [2024-11-06 14:15:48.619512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.619520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.631512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.631521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 [2024-11-06 14:15:48.643512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.480 [2024-11-06 14:15:48.643519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2671078) - No such process 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2671078 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.480 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:02.481 delay0 00:34:02.481 14:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.481 14:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:02.741 [2024-11-06 14:15:48.851922] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:09.321 [2024-11-06 14:15:55.123399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165abb0 is same with the state(6) to be set 00:34:09.321 [2024-11-06 14:15:55.123435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165abb0 is same with the state(6) to be set 00:34:09.321 [2024-11-06 14:15:55.123440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165abb0 is same with the state(6) to be set 00:34:09.321 Initializing NVMe Controllers 00:34:09.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:09.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:09.321 Initialization complete. Launching workers. 
00:34:09.321 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:34:09.321 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 40 00:34:09.321 success 188, unsuccessful 201, failed 0 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.321 rmmod nvme_tcp 00:34:09.321 rmmod nvme_fabrics 00:34:09.321 rmmod nvme_keyring 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2668785 ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- 
# '[' -z 2668785 ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2668785' 00:34:09.321 killing process with pid 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2668785 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.321 14:15:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.321 14:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.230 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.231 00:34:11.231 real 0m33.824s 00:34:11.231 user 0m42.554s 00:34:11.231 sys 0m12.337s 00:34:11.231 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:11.231 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.231 ************************************ 00:34:11.231 END TEST nvmf_zcopy 00:34:11.231 ************************************ 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:11.491 
************************************ 00:34:11.491 START TEST nvmf_nmic 00:34:11.491 ************************************ 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:11.491 * Looking for test storage... 00:34:11.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:11.491 14:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:11.491 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.492 14:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:11.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.492 --rc genhtml_branch_coverage=1 00:34:11.492 --rc genhtml_function_coverage=1 00:34:11.492 --rc genhtml_legend=1 00:34:11.492 --rc geninfo_all_blocks=1 00:34:11.492 --rc geninfo_unexecuted_blocks=1 00:34:11.492 00:34:11.492 ' 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:11.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.492 --rc genhtml_branch_coverage=1 00:34:11.492 --rc genhtml_function_coverage=1 00:34:11.492 --rc genhtml_legend=1 00:34:11.492 --rc geninfo_all_blocks=1 00:34:11.492 --rc geninfo_unexecuted_blocks=1 00:34:11.492 00:34:11.492 ' 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:11.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.492 --rc genhtml_branch_coverage=1 00:34:11.492 --rc genhtml_function_coverage=1 00:34:11.492 --rc genhtml_legend=1 00:34:11.492 --rc geninfo_all_blocks=1 00:34:11.492 --rc geninfo_unexecuted_blocks=1 00:34:11.492 00:34:11.492 ' 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:11.492 --rc 
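The trace above steps through the `cmp_versions`/`lt` helpers from `scripts/common.sh`, splitting `1.15` and `2` on `.-:` and comparing component by component. A minimal standalone sketch of that logic (this is a reconstruction in the same spirit, not the exact SPDK helper):

```shell
# Sketch of a bash "version less-than" check, modeled on the trace above:
# split both versions on '.', '-' and ':' and compare numerically per field.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0                 # strictly less at this field
        (( a > b )) && return 1
    done
    return 1                                    # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This mirrors why the trace takes the `return 0` branch at `scripts/common.sh@368`: the first component comparison (1 < 2) already decides the result.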
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.492 --rc genhtml_branch_coverage=1 00:34:11.492 --rc genhtml_function_coverage=1 00:34:11.492 --rc genhtml_legend=1 00:34:11.492 --rc geninfo_all_blocks=1 00:34:11.492 --rc geninfo_unexecuted_blocks=1 00:34:11.492 00:34:11.492 ' 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.492 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:11.752 14:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.752 14:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:11.752 14:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.889 14:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.889 14:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:19.889 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:19.889 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.889 14:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:19.889 Found net devices under 0000:31:00.0: cvl_0_0 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.889 14:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:19.889 Found net devices under 0000:31:00.1: cvl_0_1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
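The discovery loop above (`gather_supported_nvmf_pci_devs`) finds the two E810 ports (`0x8086 - 0x159b`) and their netdevs `cvl_0_0`/`cvl_0_1` by walking sysfs. A read-only sketch of that sysfs walk, runnable without root (output depends entirely on the host's hardware, so none is shown here):

```shell
# Sketch of the sysfs enumeration the trace performs: for each PCI function,
# read its vendor/device IDs and list any network interfaces bound to it.
list_pci_nics() {
    local pci net_dev vendor device
    for pci in /sys/bus/pci/devices/*; do
        [ -e "$pci/vendor" ] || continue
        vendor=$(cat "$pci/vendor")   # e.g. 0x8086 for Intel
        device=$(cat "$pci/device")   # e.g. 0x159b for an E810 port
        # A NIC exposes its netdev name(s) under <pci>/net/
        for net_dev in "$pci"/net/*; do
            [ -e "$net_dev" ] || continue
            echo "Found ${pci##*/} ($vendor - $device): ${net_dev##*/}"
        done
    done
    return 0
}

list_pci_nics
```

The real helper additionally filters against known Intel/Mellanox device-ID tables (`e810`, `x722`, `mlx`) and checks link state, as visible in the `[[ up == up ]]` tests above.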
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.889 14:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:34:19.889 00:34:19.889 --- 10.0.0.2 ping statistics --- 00:34:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.889 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:19.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:34:19.889 00:34:19.889 --- 10.0.0.1 ping statistics --- 00:34:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.889 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2677449 
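The `nvmf_tcp_init` phase above moves the target-side port into its own network namespace, addresses both sides, opens TCP/4420, and pings in both directions. A condensed sketch of that sequence, using the interface names, namespace, and addresses from the trace (it needs root and the actual `cvl_0_*` interfaces, so it is guarded and skips otherwise):

```shell
# Sketch of the namespace split performed above: target port goes into
# cvl_0_0_ns_spdk with 10.0.0.2/24; the initiator keeps 10.0.0.1/24.
setup_target_ns() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"         # target port lives in the netns

    ip addr add 10.0.0.1/24 dev "$ini_if"     # initiator side (host netns)
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow NVMe/TCP traffic on the default port, as the ipts helper does
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

    # Reachability checks in both directions, matching the pings above
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Only attempt this with root privileges and the expected interfaces present.
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_1 >/dev/null 2>&1; then
    setup_target_ns
else
    echo "setup_target_ns: skipped (needs root and cvl_0_* interfaces)"
fi
```

Isolating the target in a namespace is what lets a single machine act as both NVMe-oF target and initiator over a real NIC pair, which is why every target-side command in the trace is prefixed with `ip netns exec cvl_0_0_ns_spdk`.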
00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2677449 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2677449 ']' 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.889 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:19.890 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.890 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:19.890 14:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.890 [2024-11-06 14:16:05.489265] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.890 [2024-11-06 14:16:05.490461] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:34:19.890 [2024-11-06 14:16:05.490513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.890 [2024-11-06 14:16:05.590254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.890 [2024-11-06 14:16:05.644155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.890 [2024-11-06 14:16:05.644206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.890 [2024-11-06 14:16:05.644214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.890 [2024-11-06 14:16:05.644221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.890 [2024-11-06 14:16:05.644232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.890 [2024-11-06 14:16:05.646312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.890 [2024-11-06 14:16:05.646451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.890 [2024-11-06 14:16:05.646610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.890 [2024-11-06 14:16:05.646611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.890 [2024-11-06 14:16:05.725161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.890 [2024-11-06 14:16:05.726219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.890 [2024-11-06 14:16:05.726480] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:19.890 [2024-11-06 14:16:05.727094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.890 [2024-11-06 14:16:05.727138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:20.150 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.150 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.151 [2024-11-06 14:16:06.343607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.151 Malloc0 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.151 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.411 [2024-11-06 14:16:06.443799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.411 14:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:20.411 test case1: single bdev can't be used in multiple subsystems 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.411 [2024-11-06 14:16:06.479242] 
bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:20.411 [2024-11-06 14:16:06.479269] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:20.411 [2024-11-06 14:16:06.479277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.411 request: 00:34:20.411 { 00:34:20.411 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:20.411 "namespace": { 00:34:20.411 "bdev_name": "Malloc0", 00:34:20.411 "no_auto_visible": false 00:34:20.411 }, 00:34:20.411 "method": "nvmf_subsystem_add_ns", 00:34:20.411 "req_id": 1 00:34:20.411 } 00:34:20.411 Got JSON-RPC error response 00:34:20.411 response: 00:34:20.411 { 00:34:20.411 "code": -32602, 00:34:20.411 "message": "Invalid parameters" 00:34:20.411 } 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:20.411 Adding namespace failed - expected result. 
00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:20.411 test case2: host connect to nvmf target in multiple paths 00:34:20.411 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:20.412 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.412 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.412 [2024-11-06 14:16:06.491387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:20.412 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.412 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:20.673 14:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:21.243 14:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:21.243 14:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:21.243 14:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:21.243 14:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:21.243 14:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:23.157 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:23.158 14:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:23.158 [global] 00:34:23.158 thread=1 00:34:23.158 invalidate=1 00:34:23.158 rw=write 00:34:23.158 time_based=1 00:34:23.158 runtime=1 00:34:23.158 ioengine=libaio 00:34:23.158 direct=1 00:34:23.158 bs=4096 00:34:23.158 iodepth=1 00:34:23.158 norandommap=0 00:34:23.158 numjobs=1 00:34:23.158 00:34:23.158 verify_dump=1 00:34:23.158 verify_backlog=512 00:34:23.158 verify_state_save=0 00:34:23.158 do_verify=1 00:34:23.158 verify=crc32c-intel 00:34:23.158 [job0] 00:34:23.158 filename=/dev/nvme0n1 00:34:23.158 Could not set queue depth (nvme0n1) 00:34:23.739 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.739 fio-3.35 00:34:23.739 Starting 1 thread 00:34:24.681 00:34:24.681 job0: (groupid=0, jobs=1): err= 0: pid=2678346: Wed Nov 6 
14:16:10 2024 00:34:24.681 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:34:24.681 slat (nsec): min=26074, max=27284, avg=26407.74, stdev=314.42 00:34:24.681 clat (usec): min=882, max=42999, avg=39702.00, stdev=9411.09 00:34:24.681 lat (usec): min=908, max=43026, avg=39728.40, stdev=9411.03 00:34:24.681 clat percentiles (usec): 00:34:24.681 | 1.00th=[ 881], 5.00th=[ 881], 10.00th=[41157], 20.00th=[41157], 00:34:24.681 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:24.681 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:34:24.681 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:24.681 | 99.99th=[43254] 00:34:24.681 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:34:24.681 slat (usec): min=9, max=26798, avg=78.39, stdev=1183.24 00:34:24.681 clat (usec): min=197, max=1090, avg=464.70, stdev=96.81 00:34:24.681 lat (usec): min=231, max=27247, avg=543.09, stdev=1186.63 00:34:24.681 clat percentiles (usec): 00:34:24.681 | 1.00th=[ 253], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 375], 00:34:24.681 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 482], 00:34:24.681 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 611], 00:34:24.681 | 99.00th=[ 725], 99.50th=[ 791], 99.90th=[ 1090], 99.95th=[ 1090], 00:34:24.681 | 99.99th=[ 1090] 00:34:24.681 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.681 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.681 lat (usec) : 250=0.94%, 500=65.54%, 750=29.19%, 1000=0.75% 00:34:24.681 lat (msec) : 2=0.19%, 50=3.39% 00:34:24.681 cpu : usr=0.87%, sys=0.96%, ctx=534, majf=0, minf=1 00:34:24.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:24.681 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.681 00:34:24.681 Run status group 0 (all jobs): 00:34:24.681 READ: bw=73.1KiB/s (74.8kB/s), 73.1KiB/s-73.1KiB/s (74.8kB/s-74.8kB/s), io=76.0KiB (77.8kB), run=1040-1040msec 00:34:24.681 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:34:24.681 00:34:24.681 Disk stats (read/write): 00:34:24.681 nvme0n1: ios=41/512, merge=0/0, ticks=1551/226, in_queue=1777, util=98.40% 00:34:24.681 14:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:24.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.941 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.941 rmmod nvme_tcp 00:34:24.941 rmmod nvme_fabrics 00:34:24.941 rmmod nvme_keyring 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2677449 ']' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2677449 ']' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2677449' 00:34:25.202 killing process with pid 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2677449 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.202 14:16:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.202 14:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.749 00:34:27.749 real 0m15.948s 00:34:27.749 user 0m36.090s 00:34:27.749 sys 0m7.453s 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.749 ************************************ 00:34:27.749 END TEST nvmf_nmic 00:34:27.749 ************************************ 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:27.749 ************************************ 00:34:27.749 START TEST nvmf_fio_target 00:34:27.749 ************************************ 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:27.749 * Looking for test storage... 
00:34:27.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.749 
14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:27.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.749 --rc genhtml_branch_coverage=1 00:34:27.749 --rc genhtml_function_coverage=1 00:34:27.749 --rc genhtml_legend=1 00:34:27.749 --rc geninfo_all_blocks=1 00:34:27.749 --rc geninfo_unexecuted_blocks=1 00:34:27.749 00:34:27.749 ' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:27.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.749 --rc genhtml_branch_coverage=1 00:34:27.749 --rc genhtml_function_coverage=1 00:34:27.749 --rc genhtml_legend=1 00:34:27.749 --rc geninfo_all_blocks=1 00:34:27.749 --rc geninfo_unexecuted_blocks=1 00:34:27.749 00:34:27.749 ' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:27.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.749 --rc genhtml_branch_coverage=1 00:34:27.749 --rc genhtml_function_coverage=1 00:34:27.749 --rc genhtml_legend=1 00:34:27.749 --rc geninfo_all_blocks=1 00:34:27.749 --rc geninfo_unexecuted_blocks=1 00:34:27.749 00:34:27.749 ' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:27.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.749 --rc genhtml_branch_coverage=1 00:34:27.749 --rc genhtml_function_coverage=1 00:34:27.749 --rc genhtml_legend=1 00:34:27.749 --rc geninfo_all_blocks=1 
00:34:27.749 --rc geninfo_unexecuted_blocks=1 00:34:27.749 00:34:27.749 ' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:27.749 
14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.749 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.749 14:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.750 
14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.750 14:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.750 14:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.898 14:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:35.898 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:35.898 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.898 
14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:35.898 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.898 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:35.899 Found net devices under 0000:31:00.1: cvl_0_1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.899 14:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:34:35.899 00:34:35.899 --- 10.0.0.2 ping statistics --- 00:34:35.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.899 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:34:35.899 00:34:35.899 --- 10.0.0.1 ping statistics --- 00:34:35.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.899 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.899 14:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2682945 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2682945 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2682945 ']' 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:35.899 14:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.899 [2024-11-06 14:16:21.485108] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.899 [2024-11-06 14:16:21.486310] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:34:35.899 [2024-11-06 14:16:21.486362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.899 [2024-11-06 14:16:21.585374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.899 [2024-11-06 14:16:21.638265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.899 [2024-11-06 14:16:21.638315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.899 [2024-11-06 14:16:21.638323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.899 [2024-11-06 14:16:21.638330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.899 [2024-11-06 14:16:21.638337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.899 [2024-11-06 14:16:21.640781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.899 [2024-11-06 14:16:21.640876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.899 [2024-11-06 14:16:21.641196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.899 [2024-11-06 14:16:21.641199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.899 [2024-11-06 14:16:21.719782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.899 [2024-11-06 14:16:21.720683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.899 [2024-11-06 14:16:21.720982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:35.899 [2024-11-06 14:16:21.721498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:35.899 [2024-11-06 14:16:21.721521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.161 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:36.422 [2024-11-06 14:16:22.510229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.422 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.684 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:36.684 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:36.945 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:36.945 14:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.945 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:36.945 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.207 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:37.207 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:37.468 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.786 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:37.786 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.786 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:37.786 14:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:38.104 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:38.104 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:38.104 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:38.397 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:38.397 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.659 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:38.659 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:38.659 14:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.920 [2024-11-06 14:16:25.090154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.920 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:39.181 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:39.442 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:40.016 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:34:41.933 14:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:41.933 [global] 00:34:41.933 thread=1 00:34:41.933 invalidate=1 00:34:41.933 rw=write 00:34:41.933 time_based=1 00:34:41.933 runtime=1 00:34:41.933 ioengine=libaio 00:34:41.933 direct=1 00:34:41.933 bs=4096 00:34:41.933 iodepth=1 00:34:41.933 norandommap=0 00:34:41.933 numjobs=1 00:34:41.933 00:34:41.933 verify_dump=1 00:34:41.933 verify_backlog=512 00:34:41.933 verify_state_save=0 00:34:41.933 do_verify=1 00:34:41.933 verify=crc32c-intel 00:34:41.933 [job0] 00:34:41.933 filename=/dev/nvme0n1 00:34:41.933 [job1] 00:34:41.933 filename=/dev/nvme0n2 00:34:41.933 [job2] 00:34:41.933 filename=/dev/nvme0n3 00:34:41.933 [job3] 00:34:41.933 filename=/dev/nvme0n4 00:34:41.933 Could not set queue depth (nvme0n1) 00:34:41.933 Could not set queue depth (nvme0n2) 00:34:41.933 Could not set queue depth (nvme0n3) 00:34:41.933 Could not set queue depth (nvme0n4) 00:34:42.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.193 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.193 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.193 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.193 fio-3.35 00:34:42.193 Starting 4 threads 00:34:43.582 00:34:43.582 job0: (groupid=0, jobs=1): err= 0: pid=2684391: Wed Nov 6 14:16:29 2024 00:34:43.582 read: IOPS=17, BW=70.8KiB/s (72.5kB/s)(72.0KiB/1017msec) 00:34:43.582 slat (nsec): min=3844, max=6088, avg=4307.61, stdev=529.50 00:34:43.582 clat (usec): min=628, max=41192, avg=38756.26, stdev=9515.81 00:34:43.582 lat (usec): min=634, 
max=41196, avg=38760.57, stdev=9515.37 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 627], 5.00th=[ 627], 10.00th=[40633], 20.00th=[41157], 00:34:43.582 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:43.582 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:43.582 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:43.582 | 99.99th=[41157] 00:34:43.582 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:43.582 slat (nsec): min=4375, max=90337, avg=24484.83, stdev=13605.79 00:34:43.582 clat (usec): min=129, max=970, avg=593.99, stdev=179.69 00:34:43.582 lat (usec): min=134, max=1003, avg=618.48, stdev=190.40 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 190], 5.00th=[ 255], 10.00th=[ 310], 20.00th=[ 404], 00:34:43.582 | 30.00th=[ 529], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 693], 00:34:43.582 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 799], 00:34:43.582 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 971], 99.95th=[ 971], 00:34:43.582 | 99.99th=[ 971] 00:34:43.582 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.582 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.582 lat (usec) : 250=4.53%, 500=21.32%, 750=54.15%, 1000=16.79% 00:34:43.582 lat (msec) : 50=3.21% 00:34:43.582 cpu : usr=0.59%, sys=1.18%, ctx=530, majf=0, minf=1 00:34:43.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.582 job1: (groupid=0, jobs=1): err= 0: pid=2684419: Wed Nov 6 14:16:29 2024 00:34:43.582 read: IOPS=27, BW=111KiB/s 
(114kB/s)(112KiB/1008msec) 00:34:43.582 slat (nsec): min=7937, max=28623, avg=25558.00, stdev=4769.90 00:34:43.582 clat (usec): min=423, max=42035, avg=24255.30, stdev=20710.24 00:34:43.582 lat (usec): min=449, max=42062, avg=24280.86, stdev=20710.66 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 424], 5.00th=[ 586], 10.00th=[ 652], 20.00th=[ 750], 00:34:43.582 | 30.00th=[ 848], 40.00th=[ 1106], 50.00th=[41681], 60.00th=[41681], 00:34:43.582 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:43.582 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.582 | 99.99th=[42206] 00:34:43.582 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:34:43.582 slat (nsec): min=9261, max=53522, avg=30054.49, stdev=10125.91 00:34:43.582 clat (usec): min=241, max=967, avg=602.61, stdev=135.44 00:34:43.582 lat (usec): min=252, max=1001, avg=632.66, stdev=139.53 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 318], 5.00th=[ 375], 10.00th=[ 424], 20.00th=[ 486], 00:34:43.582 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 644], 00:34:43.582 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:34:43.582 | 99.00th=[ 906], 99.50th=[ 963], 99.90th=[ 971], 99.95th=[ 971], 00:34:43.582 | 99.99th=[ 971] 00:34:43.582 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.582 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.582 lat (usec) : 250=0.37%, 500=23.33%, 750=57.59%, 1000=15.37% 00:34:43.582 lat (msec) : 2=0.37%, 50=2.96% 00:34:43.582 cpu : usr=1.69%, sys=1.29%, ctx=540, majf=0, minf=2 00:34:43.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 issued rwts: total=28,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:43.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.582 job2: (groupid=0, jobs=1): err= 0: pid=2684447: Wed Nov 6 14:16:29 2024 00:34:43.582 read: IOPS=377, BW=1510KiB/s (1547kB/s)(1512KiB/1001msec) 00:34:43.582 slat (nsec): min=7220, max=61048, avg=24343.29, stdev=9405.26 00:34:43.582 clat (usec): min=414, max=41490, avg=1858.84, stdev=6472.30 00:34:43.582 lat (usec): min=430, max=41548, avg=1883.18, stdev=6473.49 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 478], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 701], 00:34:43.582 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:34:43.582 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 881], 00:34:43.582 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:43.582 | 99.99th=[41681] 00:34:43.582 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:43.582 slat (usec): min=10, max=39959, avg=104.58, stdev=1764.83 00:34:43.582 clat (usec): min=114, max=881, avg=444.61, stdev=113.81 00:34:43.582 lat (usec): min=125, max=40299, avg=549.19, stdev=1764.26 00:34:43.582 clat percentiles (usec): 00:34:43.582 | 1.00th=[ 210], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 355], 00:34:43.582 | 30.00th=[ 371], 40.00th=[ 400], 50.00th=[ 449], 60.00th=[ 478], 00:34:43.582 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 570], 95.00th=[ 644], 00:34:43.582 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:34:43.582 | 99.99th=[ 881] 00:34:43.582 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.582 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.582 lat (usec) : 250=1.12%, 500=41.80%, 750=24.83%, 1000=30.90% 00:34:43.582 lat (msec) : 2=0.11%, 10=0.11%, 50=1.12% 00:34:43.582 cpu : usr=1.30%, sys=2.30%, ctx=894, majf=0, minf=1 00:34:43.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.582 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.582 issued rwts: total=378,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.582 job3: (groupid=0, jobs=1): err= 0: pid=2684457: Wed Nov 6 14:16:29 2024 00:34:43.582 read: IOPS=15, BW=63.0KiB/s (64.5kB/s)(64.0KiB/1016msec) 00:34:43.582 slat (nsec): min=27591, max=46607, avg=28953.69, stdev=4709.55 00:34:43.582 clat (usec): min=40841, max=42043, avg=41616.75, stdev=469.31 00:34:43.582 lat (usec): min=40869, max=42070, avg=41645.70, stdev=469.05 00:34:43.582 clat percentiles (usec): 00:34:43.583 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:43.583 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:43.583 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:43.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.583 | 99.99th=[42206] 00:34:43.583 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:43.583 slat (nsec): min=9638, max=57625, avg=32307.31, stdev=9960.29 00:34:43.583 clat (usec): min=279, max=1004, avg=640.73, stdev=125.10 00:34:43.583 lat (usec): min=290, max=1039, avg=673.03, stdev=128.80 00:34:43.583 clat percentiles (usec): 00:34:43.583 | 1.00th=[ 355], 5.00th=[ 437], 10.00th=[ 478], 20.00th=[ 537], 00:34:43.583 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:43.583 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:34:43.583 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:43.583 | 99.99th=[ 1004] 00:34:43.583 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.583 lat (usec) : 500=13.07%, 750=65.91%, 
1000=17.80% 00:34:43.583 lat (msec) : 2=0.19%, 50=3.03% 00:34:43.583 cpu : usr=1.18%, sys=1.87%, ctx=530, majf=0, minf=1 00:34:43.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.583 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.583 00:34:43.583 Run status group 0 (all jobs): 00:34:43.583 READ: bw=1731KiB/s (1772kB/s), 63.0KiB/s-1510KiB/s (64.5kB/s-1547kB/s), io=1760KiB (1802kB), run=1001-1017msec 00:34:43.583 WRITE: bw=8055KiB/s (8248kB/s), 2014KiB/s-2046KiB/s (2062kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1017msec 00:34:43.583 00:34:43.583 Disk stats (read/write): 00:34:43.583 nvme0n1: ios=62/512, merge=0/0, ticks=556/301, in_queue=857, util=86.27% 00:34:43.583 nvme0n2: ios=72/512, merge=0/0, ticks=517/265, in_queue=782, util=85.58% 00:34:43.583 nvme0n3: ios=167/512, merge=0/0, ticks=727/211, in_queue=938, util=92.49% 00:34:43.583 nvme0n4: ios=60/512, merge=0/0, ticks=869/271, in_queue=1140, util=97.17% 00:34:43.583 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:43.583 [global] 00:34:43.583 thread=1 00:34:43.583 invalidate=1 00:34:43.583 rw=randwrite 00:34:43.583 time_based=1 00:34:43.583 runtime=1 00:34:43.583 ioengine=libaio 00:34:43.583 direct=1 00:34:43.583 bs=4096 00:34:43.583 iodepth=1 00:34:43.583 norandommap=0 00:34:43.583 numjobs=1 00:34:43.583 00:34:43.583 verify_dump=1 00:34:43.583 verify_backlog=512 00:34:43.583 verify_state_save=0 00:34:43.583 do_verify=1 00:34:43.583 verify=crc32c-intel 00:34:43.583 [job0] 00:34:43.583 filename=/dev/nvme0n1 00:34:43.583 [job1] 00:34:43.583 
filename=/dev/nvme0n2 00:34:43.583 [job2] 00:34:43.583 filename=/dev/nvme0n3 00:34:43.583 [job3] 00:34:43.583 filename=/dev/nvme0n4 00:34:43.583 Could not set queue depth (nvme0n1) 00:34:43.583 Could not set queue depth (nvme0n2) 00:34:43.583 Could not set queue depth (nvme0n3) 00:34:43.583 Could not set queue depth (nvme0n4) 00:34:44.154 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.154 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.154 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.154 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.154 fio-3.35 00:34:44.154 Starting 4 threads 00:34:45.094 00:34:45.094 job0: (groupid=0, jobs=1): err= 0: pid=2684868: Wed Nov 6 14:16:31 2024 00:34:45.094 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1015msec) 00:34:45.094 slat (nsec): min=27825, max=28525, avg=28059.94, stdev=158.34 00:34:45.094 clat (usec): min=40711, max=42030, avg=41302.34, stdev=455.11 00:34:45.094 lat (usec): min=40739, max=42058, avg=41330.40, stdev=455.08 00:34:45.094 clat percentiles (usec): 00:34:45.094 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:45.094 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:45.094 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:45.094 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.094 | 99.99th=[42206] 00:34:45.094 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:34:45.094 slat (nsec): min=8665, max=64212, avg=31919.91, stdev=9763.35 00:34:45.094 clat (usec): min=158, max=799, avg=486.09, stdev=102.48 00:34:45.094 lat (usec): min=198, max=841, avg=518.01, stdev=105.62 00:34:45.094 clat percentiles (usec): 
00:34:45.094 | 1.00th=[ 273], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 388], 00:34:45.094 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[ 519], 00:34:45.094 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 668], 00:34:45.094 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 799], 99.95th=[ 799], 00:34:45.094 | 99.99th=[ 799] 00:34:45.094 bw ( KiB/s): min= 4104, max= 4104, per=47.34%, avg=4104.00, stdev= 0.00, samples=1 00:34:45.094 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:34:45.095 lat (usec) : 250=0.75%, 500=46.98%, 750=48.11%, 1000=0.75% 00:34:45.095 lat (msec) : 50=3.40% 00:34:45.095 cpu : usr=1.08%, sys=2.07%, ctx=531, majf=0, minf=1 00:34:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.095 job1: (groupid=0, jobs=1): err= 0: pid=2684884: Wed Nov 6 14:16:31 2024 00:34:45.095 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:45.095 slat (nsec): min=8475, max=60176, avg=25940.82, stdev=3354.98 00:34:45.095 clat (usec): min=699, max=1332, avg=1087.13, stdev=92.59 00:34:45.095 lat (usec): min=725, max=1358, avg=1113.08, stdev=92.64 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[ 783], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1020], 00:34:45.095 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:34:45.095 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:34:45.095 | 99.00th=[ 1254], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:34:45.095 | 99.99th=[ 1336] 00:34:45.095 write: IOPS=663, BW=2653KiB/s (2717kB/s)(2656KiB/1001msec); 0 zone resets 00:34:45.095 slat (nsec): min=4485, max=68507, 
avg=30907.96, stdev=8470.93 00:34:45.095 clat (usec): min=247, max=996, avg=600.66, stdev=137.89 00:34:45.095 lat (usec): min=281, max=1029, avg=631.57, stdev=140.08 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[ 281], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 486], 00:34:45.095 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635], 00:34:45.095 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 848], 00:34:45.095 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 996], 99.95th=[ 996], 00:34:45.095 | 99.99th=[ 996] 00:34:45.095 bw ( KiB/s): min= 4096, max= 4096, per=47.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.095 lat (usec) : 250=0.09%, 500=13.52%, 750=35.46%, 1000=13.95% 00:34:45.095 lat (msec) : 2=36.99% 00:34:45.095 cpu : usr=1.60%, sys=3.70%, ctx=1182, majf=0, minf=1 00:34:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 issued rwts: total=512,664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.095 job2: (groupid=0, jobs=1): err= 0: pid=2684903: Wed Nov 6 14:16:31 2024 00:34:45.095 read: IOPS=15, BW=63.5KiB/s (65.0kB/s)(64.0KiB/1008msec) 00:34:45.095 slat (nsec): min=26414, max=27243, avg=26666.81, stdev=221.02 00:34:45.095 clat (usec): min=40906, max=42187, avg=41750.93, stdev=389.42 00:34:45.095 lat (usec): min=40934, max=42214, avg=41777.60, stdev=389.28 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:45.095 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:45.095 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:45.095 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.095 | 99.99th=[42206] 00:34:45.095 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:34:45.095 slat (nsec): min=4797, max=63975, avg=32190.44, stdev=7675.45 00:34:45.095 clat (usec): min=219, max=1130, avg=619.54, stdev=154.22 00:34:45.095 lat (usec): min=230, max=1165, avg=651.73, stdev=155.76 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 420], 20.00th=[ 502], 00:34:45.095 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:34:45.095 | 70.00th=[ 676], 80.00th=[ 766], 90.00th=[ 840], 95.00th=[ 889], 00:34:45.095 | 99.00th=[ 988], 99.50th=[ 1037], 99.90th=[ 1123], 99.95th=[ 1123], 00:34:45.095 | 99.99th=[ 1123] 00:34:45.095 bw ( KiB/s): min= 4104, max= 4104, per=47.34%, avg=4104.00, stdev= 0.00, samples=1 00:34:45.095 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:34:45.095 lat (usec) : 250=0.38%, 500=18.56%, 750=57.20%, 1000=19.89% 00:34:45.095 lat (msec) : 2=0.95%, 50=3.03% 00:34:45.095 cpu : usr=0.79%, sys=1.69%, ctx=530, majf=0, minf=1 00:34:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.095 job3: (groupid=0, jobs=1): err= 0: pid=2684908: Wed Nov 6 14:16:31 2024 00:34:45.095 read: IOPS=241, BW=967KiB/s (991kB/s)(976KiB/1009msec) 00:34:45.095 slat (nsec): min=7551, max=46188, avg=26795.10, stdev=6190.36 00:34:45.095 clat (usec): min=170, max=41169, avg=3346.49, stdev=10318.84 00:34:45.095 lat (usec): min=197, max=41197, avg=3373.29, stdev=10319.10 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[ 200], 5.00th=[ 293], 10.00th=[ 
338], 20.00th=[ 420], 00:34:45.095 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 523], 60.00th=[ 611], 00:34:45.095 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[41157], 00:34:45.095 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:45.095 | 99.99th=[41157] 00:34:45.095 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:45.095 slat (nsec): min=9882, max=53849, avg=24652.41, stdev=12005.57 00:34:45.095 clat (usec): min=106, max=670, avg=324.76, stdev=111.98 00:34:45.095 lat (usec): min=117, max=706, avg=349.41, stdev=115.49 00:34:45.095 clat percentiles (usec): 00:34:45.095 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 178], 20.00th=[ 239], 00:34:45.095 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 343], 00:34:45.095 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 482], 95.00th=[ 523], 00:34:45.095 | 99.00th=[ 578], 99.50th=[ 635], 99.90th=[ 668], 99.95th=[ 668], 00:34:45.095 | 99.99th=[ 668] 00:34:45.095 bw ( KiB/s): min= 4104, max= 4104, per=47.34%, avg=4104.00, stdev= 0.00, samples=1 00:34:45.095 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:34:45.095 lat (usec) : 250=16.14%, 500=60.19%, 750=20.63%, 1000=0.79% 00:34:45.095 lat (msec) : 50=2.25% 00:34:45.095 cpu : usr=0.79%, sys=2.08%, ctx=757, majf=0, minf=1 00:34:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.095 issued rwts: total=244,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.095 00:34:45.095 Run status group 0 (all jobs): 00:34:45.095 READ: bw=3113KiB/s (3188kB/s), 63.5KiB/s-2046KiB/s (65.0kB/s-2095kB/s), io=3160KiB (3236kB), run=1001-1015msec 00:34:45.095 WRITE: bw=8670KiB/s (8878kB/s), 2018KiB/s-2653KiB/s (2066kB/s-2717kB/s), 
io=8800KiB (9011kB), run=1001-1015msec 00:34:45.095 00:34:45.095 Disk stats (read/write): 00:34:45.095 nvme0n1: ios=56/512, merge=0/0, ticks=634/197, in_queue=831, util=86.67% 00:34:45.095 nvme0n2: ios=481/512, merge=0/0, ticks=1354/273, in_queue=1627, util=88.71% 00:34:45.095 nvme0n3: ios=62/512, merge=0/0, ticks=639/292, in_queue=931, util=95.28% 00:34:45.095 nvme0n4: ios=287/512, merge=0/0, ticks=749/153, in_queue=902, util=96.71% 00:34:45.095 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:45.354 [global] 00:34:45.354 thread=1 00:34:45.354 invalidate=1 00:34:45.354 rw=write 00:34:45.354 time_based=1 00:34:45.354 runtime=1 00:34:45.354 ioengine=libaio 00:34:45.354 direct=1 00:34:45.354 bs=4096 00:34:45.354 iodepth=128 00:34:45.354 norandommap=0 00:34:45.354 numjobs=1 00:34:45.354 00:34:45.354 verify_dump=1 00:34:45.354 verify_backlog=512 00:34:45.354 verify_state_save=0 00:34:45.354 do_verify=1 00:34:45.354 verify=crc32c-intel 00:34:45.354 [job0] 00:34:45.355 filename=/dev/nvme0n1 00:34:45.355 [job1] 00:34:45.355 filename=/dev/nvme0n2 00:34:45.355 [job2] 00:34:45.355 filename=/dev/nvme0n3 00:34:45.355 [job3] 00:34:45.355 filename=/dev/nvme0n4 00:34:45.355 Could not set queue depth (nvme0n1) 00:34:45.355 Could not set queue depth (nvme0n2) 00:34:45.355 Could not set queue depth (nvme0n3) 00:34:45.355 Could not set queue depth (nvme0n4) 00:34:45.615 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.615 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.615 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.615 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:34:45.615 fio-3.35 00:34:45.615 Starting 4 threads 00:34:47.000 00:34:47.000 job0: (groupid=0, jobs=1): err= 0: pid=2685355: Wed Nov 6 14:16:33 2024 00:34:47.000 read: IOPS=6866, BW=26.8MiB/s (28.1MB/s)(27.0MiB/1006msec) 00:34:47.000 slat (nsec): min=1013, max=7864.7k, avg=57911.06, stdev=490241.32 00:34:47.000 clat (usec): min=2594, max=18342, avg=8357.46, stdev=2018.81 00:34:47.000 lat (usec): min=2667, max=18349, avg=8415.37, stdev=2056.86 00:34:47.000 clat percentiles (usec): 00:34:47.000 | 1.00th=[ 3589], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7177], 00:34:47.000 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8291], 00:34:47.000 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[10945], 95.00th=[12518], 00:34:47.000 | 99.00th=[14484], 99.50th=[15139], 99.90th=[18220], 99.95th=[18220], 00:34:47.000 | 99.99th=[18220] 00:34:47.000 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:34:47.000 slat (nsec): min=1708, max=11733k, avg=59050.35, stdev=455422.84 00:34:47.000 clat (usec): min=705, max=66996, avg=9741.83, stdev=8892.73 00:34:47.000 lat (usec): min=714, max=67006, avg=9800.88, stdev=8922.81 00:34:47.000 clat percentiles (usec): 00:34:47.000 | 1.00th=[ 2343], 5.00th=[ 4113], 10.00th=[ 4883], 20.00th=[ 5538], 00:34:47.000 | 30.00th=[ 6652], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8225], 00:34:47.000 | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[11994], 95.00th=[28181], 00:34:47.000 | 99.00th=[56886], 99.50th=[60556], 99.90th=[66323], 99.95th=[66847], 00:34:47.000 | 99.99th=[66847] 00:34:47.000 bw ( KiB/s): min=24944, max=32400, per=25.63%, avg=28672.00, stdev=5272.19, samples=2 00:34:47.000 iops : min= 6236, max= 8100, avg=7168.00, stdev=1318.05, samples=2 00:34:47.000 lat (usec) : 750=0.04%, 1000=0.02% 00:34:47.000 lat (msec) : 2=0.30%, 4=2.63%, 10=78.18%, 20=15.91%, 50=2.12% 00:34:47.000 lat (msec) : 100=0.80% 00:34:47.000 cpu : usr=6.17%, sys=7.16%, ctx=481, majf=0, minf=2 00:34:47.000 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:47.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.000 issued rwts: total=6908,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.000 job1: (groupid=0, jobs=1): err= 0: pid=2685356: Wed Nov 6 14:16:33 2024 00:34:47.000 read: IOPS=7631, BW=29.8MiB/s (31.3MB/s)(29.9MiB/1004msec) 00:34:47.000 slat (nsec): min=985, max=7619.0k, avg=65393.15, stdev=515332.71 00:34:47.000 clat (usec): min=1931, max=21974, avg=8658.80, stdev=2387.37 00:34:47.000 lat (usec): min=3154, max=28543, avg=8724.19, stdev=2420.62 00:34:47.000 clat percentiles (usec): 00:34:47.000 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 6980], 00:34:47.000 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8455], 00:34:47.000 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11994], 95.00th=[13042], 00:34:47.000 | 99.00th=[18482], 99.50th=[19530], 99.90th=[19792], 99.95th=[21890], 00:34:47.000 | 99.99th=[21890] 00:34:47.000 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:34:47.000 slat (nsec): min=1688, max=10949k, avg=55772.38, stdev=376205.65 00:34:47.000 clat (usec): min=992, max=48419, avg=7891.25, stdev=4043.76 00:34:47.000 lat (usec): min=1004, max=48426, avg=7947.02, stdev=4061.74 00:34:47.000 clat percentiles (usec): 00:34:47.000 | 1.00th=[ 3195], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5800], 00:34:47.000 | 30.00th=[ 6456], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 7963], 00:34:47.000 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[10028], 95.00th=[11469], 00:34:47.000 | 99.00th=[31589], 99.50th=[33424], 99.90th=[43254], 99.95th=[43254], 00:34:47.000 | 99.99th=[48497] 00:34:47.000 bw ( KiB/s): min=29720, max=31720, per=27.46%, avg=30720.00, stdev=1414.21, samples=2 00:34:47.000 iops : min= 7430, 
max= 7930, avg=7680.00, stdev=353.55, samples=2 00:34:47.000 lat (usec) : 1000=0.03% 00:34:47.000 lat (msec) : 2=0.03%, 4=1.07%, 10=82.03%, 20=15.60%, 50=1.24% 00:34:47.000 cpu : usr=4.59%, sys=8.08%, ctx=644, majf=0, minf=1 00:34:47.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:47.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.000 issued rwts: total=7662,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.000 job2: (groupid=0, jobs=1): err= 0: pid=2685363: Wed Nov 6 14:16:33 2024 00:34:47.000 read: IOPS=6741, BW=26.3MiB/s (27.6MB/s)(26.5MiB/1006msec) 00:34:47.000 slat (nsec): min=982, max=8646.0k, avg=76400.07, stdev=603942.05 00:34:47.001 clat (usec): min=1841, max=18889, avg=9723.26, stdev=2452.76 00:34:47.001 lat (usec): min=4561, max=18891, avg=9799.66, stdev=2494.22 00:34:47.001 clat percentiles (usec): 00:34:47.001 | 1.00th=[ 5604], 5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 7963], 00:34:47.001 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:34:47.001 | 70.00th=[10028], 80.00th=[11076], 90.00th=[14091], 95.00th=[15008], 00:34:47.001 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[19006], 00:34:47.001 | 99.99th=[19006] 00:34:47.001 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:34:47.001 slat (nsec): min=1725, max=8255.1k, avg=62364.03, stdev=451774.41 00:34:47.001 clat (usec): min=1189, max=18893, avg=8569.37, stdev=2033.93 00:34:47.001 lat (usec): min=1199, max=18895, avg=8631.74, stdev=2050.45 00:34:47.001 clat percentiles (usec): 00:34:47.001 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5669], 20.00th=[ 6718], 00:34:47.001 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:34:47.001 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10945], 
95.00th=[12256], 00:34:47.001 | 99.00th=[13435], 99.50th=[14615], 99.90th=[16909], 99.95th=[17171], 00:34:47.001 | 99.99th=[19006] 00:34:47.001 bw ( KiB/s): min=28656, max=28672, per=25.63%, avg=28664.00, stdev=11.31, samples=2 00:34:47.001 iops : min= 7164, max= 7168, avg=7166.00, stdev= 2.83, samples=2 00:34:47.001 lat (msec) : 2=0.07%, 4=0.14%, 10=77.44%, 20=22.35% 00:34:47.001 cpu : usr=5.87%, sys=6.27%, ctx=512, majf=0, minf=1 00:34:47.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:47.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.001 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.001 job3: (groupid=0, jobs=1): err= 0: pid=2685364: Wed Nov 6 14:16:33 2024 00:34:47.001 read: IOPS=6087, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1007msec) 00:34:47.001 slat (nsec): min=1019, max=9870.8k, avg=86073.05, stdev=665795.09 00:34:47.001 clat (usec): min=3600, max=47481, avg=10553.35, stdev=3724.72 00:34:47.001 lat (usec): min=3609, max=47490, avg=10639.43, stdev=3787.85 00:34:47.001 clat percentiles (usec): 00:34:47.001 | 1.00th=[ 5211], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8455], 00:34:47.001 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10290], 00:34:47.001 | 70.00th=[10814], 80.00th=[12125], 90.00th=[14746], 95.00th=[15926], 00:34:47.001 | 99.00th=[22676], 99.50th=[34341], 99.90th=[45351], 99.95th=[47449], 00:34:47.001 | 99.99th=[47449] 00:34:47.001 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:34:47.001 slat (nsec): min=1730, max=11283k, avg=71788.54, stdev=479053.77 00:34:47.001 clat (usec): min=1177, max=61505, avg=10239.19, stdev=6925.94 00:34:47.001 lat (usec): min=1189, max=61509, avg=10310.98, stdev=6963.64 00:34:47.001 clat percentiles (usec): 00:34:47.001 | 
1.00th=[ 3884], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6718], 00:34:47.001 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:47.001 | 70.00th=[ 9765], 80.00th=[11207], 90.00th=[13566], 95.00th=[18482], 00:34:47.001 | 99.00th=[52691], 99.50th=[58459], 99.90th=[61604], 99.95th=[61604], 00:34:47.001 | 99.99th=[61604] 00:34:47.001 bw ( KiB/s): min=24576, max=24576, per=21.97%, avg=24576.00, stdev= 0.00, samples=2 00:34:47.001 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:34:47.001 lat (msec) : 2=0.19%, 4=0.43%, 10=62.62%, 20=33.90%, 50=2.29% 00:34:47.001 lat (msec) : 100=0.57% 00:34:47.001 cpu : usr=3.88%, sys=6.76%, ctx=490, majf=0, minf=1 00:34:47.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:47.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.001 issued rwts: total=6130,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.001 00:34:47.001 Run status group 0 (all jobs): 00:34:47.001 READ: bw=107MiB/s (112MB/s), 23.8MiB/s-29.8MiB/s (24.9MB/s-31.3MB/s), io=107MiB (113MB), run=1004-1007msec 00:34:47.001 WRITE: bw=109MiB/s (115MB/s), 23.8MiB/s-29.9MiB/s (25.0MB/s-31.3MB/s), io=110MiB (115MB), run=1004-1007msec 00:34:47.001 00:34:47.001 Disk stats (read/write): 00:34:47.001 nvme0n1: ios=5681/5895, merge=0/0, ticks=44209/57688, in_queue=101897, util=85.87% 00:34:47.001 nvme0n2: ios=6268/6656, merge=0/0, ticks=50484/51452, in_queue=101936, util=91.14% 00:34:47.001 nvme0n3: ios=5684/6103, merge=0/0, ticks=51841/49651, in_queue=101492, util=95.27% 00:34:47.001 nvme0n4: ios=5176/5263, merge=0/0, ticks=51965/50518, in_queue=102483, util=95.22% 00:34:47.001 14:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:47.001 [global] 00:34:47.001 thread=1 00:34:47.001 invalidate=1 00:34:47.001 rw=randwrite 00:34:47.001 time_based=1 00:34:47.001 runtime=1 00:34:47.001 ioengine=libaio 00:34:47.001 direct=1 00:34:47.001 bs=4096 00:34:47.001 iodepth=128 00:34:47.001 norandommap=0 00:34:47.001 numjobs=1 00:34:47.001 00:34:47.001 verify_dump=1 00:34:47.001 verify_backlog=512 00:34:47.001 verify_state_save=0 00:34:47.001 do_verify=1 00:34:47.001 verify=crc32c-intel 00:34:47.001 [job0] 00:34:47.001 filename=/dev/nvme0n1 00:34:47.001 [job1] 00:34:47.001 filename=/dev/nvme0n2 00:34:47.001 [job2] 00:34:47.001 filename=/dev/nvme0n3 00:34:47.001 [job3] 00:34:47.001 filename=/dev/nvme0n4 00:34:47.001 Could not set queue depth (nvme0n1) 00:34:47.001 Could not set queue depth (nvme0n2) 00:34:47.001 Could not set queue depth (nvme0n3) 00:34:47.001 Could not set queue depth (nvme0n4) 00:34:47.261 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.262 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.262 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.262 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.262 fio-3.35 00:34:47.262 Starting 4 threads 00:34:48.647 00:34:48.647 job0: (groupid=0, jobs=1): err= 0: pid=2685867: Wed Nov 6 14:16:34 2024 00:34:48.647 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:34:48.647 slat (nsec): min=944, max=9930.6k, avg=62260.93, stdev=481671.63 00:34:48.647 clat (usec): min=2330, max=23690, avg=7937.14, stdev=2358.91 00:34:48.647 lat (usec): min=2334, max=23692, avg=7999.40, stdev=2391.50 00:34:48.647 clat percentiles (usec): 00:34:48.647 | 1.00th=[ 3458], 
5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6194], 00:34:48.647 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7832], 00:34:48.647 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[11207], 95.00th=[11863], 00:34:48.647 | 99.00th=[14877], 99.50th=[19006], 99.90th=[23200], 99.95th=[23725], 00:34:48.647 | 99.99th=[23725] 00:34:48.647 write: IOPS=8108, BW=31.7MiB/s (33.2MB/s)(31.8MiB/1003msec); 0 zone resets 00:34:48.647 slat (nsec): min=1563, max=10288k, avg=59478.07, stdev=359909.12 00:34:48.647 clat (usec): min=1111, max=24999, avg=8149.59, stdev=4034.45 00:34:48.647 lat (usec): min=1121, max=25003, avg=8209.06, stdev=4058.61 00:34:48.647 clat percentiles (usec): 00:34:48.647 | 1.00th=[ 2343], 5.00th=[ 4113], 10.00th=[ 4490], 20.00th=[ 5997], 00:34:48.647 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177], 00:34:48.647 | 70.00th=[ 7767], 80.00th=[ 9372], 90.00th=[13173], 95.00th=[18744], 00:34:48.647 | 99.00th=[22414], 99.50th=[23462], 99.90th=[23987], 99.95th=[25035], 00:34:48.647 | 99.99th=[25035] 00:34:48.647 bw ( KiB/s): min=30560, max=33488, per=32.44%, avg=32024.00, stdev=2070.41, samples=2 00:34:48.647 iops : min= 7640, max= 8372, avg=8006.00, stdev=517.60, samples=2 00:34:48.647 lat (msec) : 2=0.34%, 4=2.32%, 10=78.80%, 20=16.63%, 50=1.92% 00:34:48.647 cpu : usr=4.99%, sys=6.89%, ctx=697, majf=0, minf=1 00:34:48.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.647 issued rwts: total=7680,8133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.647 job1: (groupid=0, jobs=1): err= 0: pid=2685868: Wed Nov 6 14:16:34 2024 00:34:48.647 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:34:48.647 slat (nsec): min=895, max=9276.3k, avg=150197.96, 
stdev=843196.89 00:34:48.647 clat (usec): min=6710, max=36315, avg=19163.46, stdev=6523.65 00:34:48.647 lat (usec): min=6712, max=36320, avg=19313.66, stdev=6530.23 00:34:48.647 clat percentiles (usec): 00:34:48.647 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[14484], 00:34:48.647 | 30.00th=[15664], 40.00th=[16909], 50.00th=[19006], 60.00th=[20579], 00:34:48.647 | 70.00th=[23200], 80.00th=[24773], 90.00th=[27395], 95.00th=[29754], 00:34:48.647 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:34:48.647 | 99.99th=[36439] 00:34:48.647 write: IOPS=3873, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1005msec); 0 zone resets 00:34:48.647 slat (nsec): min=1485, max=6381.5k, avg=113215.82, stdev=628942.65 00:34:48.647 clat (usec): min=3366, max=28967, avg=15012.42, stdev=5155.27 00:34:48.647 lat (usec): min=6353, max=28974, avg=15125.63, stdev=5151.46 00:34:48.647 clat percentiles (usec): 00:34:48.647 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[ 8848], 00:34:48.647 | 30.00th=[12256], 40.00th=[13304], 50.00th=[15401], 60.00th=[17433], 00:34:48.647 | 70.00th=[18220], 80.00th=[19530], 90.00th=[22152], 95.00th=[22676], 00:34:48.647 | 99.00th=[26346], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:34:48.647 | 99.99th=[28967] 00:34:48.647 bw ( KiB/s): min=13744, max=16384, per=15.26%, avg=15064.00, stdev=1866.76, samples=2 00:34:48.647 iops : min= 3436, max= 4096, avg=3766.00, stdev=466.69, samples=2 00:34:48.647 lat (msec) : 4=0.01%, 10=21.41%, 20=49.15%, 50=29.42% 00:34:48.647 cpu : usr=3.19%, sys=3.78%, ctx=307, majf=0, minf=1 00:34:48.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.647 issued rwts: total=3584,3893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.647 latency : target=0, window=0, percentile=100.00%, depth=128 
00:34:48.647 job2: (groupid=0, jobs=1): err= 0: pid=2685869: Wed Nov 6 14:16:34 2024 00:34:48.647 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:34:48.647 slat (nsec): min=993, max=8799.3k, avg=71666.05, stdev=550402.00 00:34:48.647 clat (usec): min=2720, max=31747, avg=9232.94, stdev=3474.29 00:34:48.647 lat (usec): min=2724, max=33301, avg=9304.61, stdev=3515.75 00:34:48.647 clat percentiles (usec): 00:34:48.648 | 1.00th=[ 5342], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7111], 00:34:48.648 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8979], 00:34:48.648 | 70.00th=[ 9372], 80.00th=[10683], 90.00th=[12911], 95.00th=[15270], 00:34:48.648 | 99.00th=[24249], 99.50th=[25822], 99.90th=[30802], 99.95th=[30802], 00:34:48.648 | 99.99th=[31851] 00:34:48.648 write: IOPS=7448, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1004msec); 0 zone resets 00:34:48.648 slat (nsec): min=1596, max=8016.0k, avg=59640.47, stdev=398578.96 00:34:48.648 clat (usec): min=1137, max=27785, avg=8177.60, stdev=3132.78 00:34:48.648 lat (usec): min=1175, max=28657, avg=8237.25, stdev=3153.08 00:34:48.648 clat percentiles (usec): 00:34:48.648 | 1.00th=[ 3032], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6259], 00:34:48.648 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:34:48.648 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[11469], 95.00th=[13042], 00:34:48.648 | 99.00th=[22938], 99.50th=[25035], 99.90th=[27657], 99.95th=[27657], 00:34:48.648 | 99.99th=[27657] 00:34:48.648 bw ( KiB/s): min=26032, max=32768, per=29.78%, avg=29400.00, stdev=4763.07, samples=2 00:34:48.648 iops : min= 6508, max= 8192, avg=7350.00, stdev=1190.77, samples=2 00:34:48.648 lat (msec) : 2=0.12%, 4=1.43%, 10=79.15%, 20=17.34%, 50=1.97% 00:34:48.648 cpu : usr=4.09%, sys=6.88%, ctx=599, majf=0, minf=1 00:34:48.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:48.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.648 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.648 issued rwts: total=7168,7478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.648 job3: (groupid=0, jobs=1): err= 0: pid=2685870: Wed Nov 6 14:16:34 2024 00:34:48.648 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:34:48.648 slat (nsec): min=991, max=7442.4k, avg=94707.09, stdev=596870.69 00:34:48.648 clat (usec): min=5732, max=21979, avg=11867.62, stdev=2367.53 00:34:48.648 lat (usec): min=5739, max=21984, avg=11962.33, stdev=2422.84 00:34:48.648 clat percentiles (usec): 00:34:48.648 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10028], 00:34:48.648 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:34:48.648 | 70.00th=[12518], 80.00th=[13435], 90.00th=[14484], 95.00th=[16581], 00:34:48.648 | 99.00th=[19006], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:34:48.648 | 99.99th=[21890] 00:34:48.648 write: IOPS=5273, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:34:48.648 slat (nsec): min=1641, max=11901k, avg=92590.83, stdev=521100.78 00:34:48.648 clat (usec): min=556, max=57819, avg=12590.50, stdev=6543.09 00:34:48.648 lat (usec): min=4128, max=58675, avg=12683.09, stdev=6583.89 00:34:48.648 clat percentiles (usec): 00:34:48.648 | 1.00th=[ 5538], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8979], 00:34:48.648 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[11469], 00:34:48.648 | 70.00th=[13829], 80.00th=[15270], 90.00th=[16909], 95.00th=[19792], 00:34:48.648 | 99.00th=[49546], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:34:48.648 | 99.99th=[57934] 00:34:48.648 bw ( KiB/s): min=19256, max=22120, per=20.96%, avg=20688.00, stdev=2025.15, samples=2 00:34:48.648 iops : min= 4814, max= 5530, avg=5172.00, stdev=506.29, samples=2 00:34:48.648 lat (usec) : 750=0.01% 00:34:48.648 lat (msec) : 10=30.21%, 20=67.13%, 50=2.20%, 
100=0.45% 00:34:48.648 cpu : usr=4.18%, sys=5.38%, ctx=429, majf=0, minf=1 00:34:48.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:48.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.648 issued rwts: total=5120,5300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.648 00:34:48.648 Run status group 0 (all jobs): 00:34:48.648 READ: bw=91.5MiB/s (96.0MB/s), 13.9MiB/s-29.9MiB/s (14.6MB/s-31.4MB/s), io=92.0MiB (96.5MB), run=1003-1005msec 00:34:48.648 WRITE: bw=96.4MiB/s (101MB/s), 15.1MiB/s-31.7MiB/s (15.9MB/s-33.2MB/s), io=96.9MiB (102MB), run=1003-1005msec 00:34:48.648 00:34:48.648 Disk stats (read/write): 00:34:48.648 nvme0n1: ios=6510/6656, merge=0/0, ticks=49412/52520, in_queue=101932, util=89.68% 00:34:48.648 nvme0n2: ios=3122/3264, merge=0/0, ticks=15093/10657, in_queue=25750, util=92.57% 00:34:48.648 nvme0n3: ios=6174/6175, merge=0/0, ticks=48218/41590, in_queue=89808, util=99.48% 00:34:48.648 nvme0n4: ios=4195/4608, merge=0/0, ticks=24945/32774, in_queue=57719, util=97.56% 00:34:48.648 14:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:48.648 14:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2686197 00:34:48.648 14:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:48.648 14:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:48.648 [global] 00:34:48.648 thread=1 00:34:48.648 invalidate=1 00:34:48.648 rw=read 00:34:48.648 time_based=1 00:34:48.648 runtime=10 00:34:48.648 ioengine=libaio 00:34:48.648 direct=1 00:34:48.648 bs=4096 00:34:48.648 
iodepth=1 00:34:48.648 norandommap=1 00:34:48.648 numjobs=1 00:34:48.648 00:34:48.648 [job0] 00:34:48.648 filename=/dev/nvme0n1 00:34:48.648 [job1] 00:34:48.648 filename=/dev/nvme0n2 00:34:48.648 [job2] 00:34:48.648 filename=/dev/nvme0n3 00:34:48.648 [job3] 00:34:48.648 filename=/dev/nvme0n4 00:34:48.648 Could not set queue depth (nvme0n1) 00:34:48.648 Could not set queue depth (nvme0n2) 00:34:48.648 Could not set queue depth (nvme0n3) 00:34:48.648 Could not set queue depth (nvme0n4) 00:34:48.908 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.908 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.908 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.908 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.908 fio-3.35 00:34:48.908 Starting 4 threads 00:34:51.457 14:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:51.718 14:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:51.718 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:34:51.718 fio: pid=2686385, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.978 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8687616, buflen=4096 00:34:51.978 fio: pid=2686384, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.978 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:34:51.978 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:52.239 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.239 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:52.239 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=307200, buflen=4096 00:34:52.239 fio: pid=2686382, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:52.239 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=18001920, buflen=4096 00:34:52.239 fio: pid=2686383, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:52.239 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.239 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:52.239 00:34:52.239 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2686382: Wed Nov 6 14:16:38 2024 00:34:52.239 read: IOPS=25, BW=100KiB/s (103kB/s)(300KiB/2986msec) 00:34:52.239 slat (usec): min=17, max=6844, avg=116.40, stdev=782.06 00:34:52.239 clat (usec): min=829, max=42011, avg=39403.67, stdev=7917.96 00:34:52.239 lat (usec): min=856, max=48172, avg=39520.71, stdev=7978.27 00:34:52.239 clat percentiles (usec): 00:34:52.239 | 1.00th=[ 832], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:52.239 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:34:52.239 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:52.239 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:52.239 | 99.99th=[42206] 00:34:52.239 bw ( KiB/s): min= 96, max= 112, per=1.19%, avg=100.80, stdev= 7.16, samples=5 00:34:52.239 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:34:52.239 lat (usec) : 1000=3.95% 00:34:52.239 lat (msec) : 50=94.74% 00:34:52.239 cpu : usr=0.17%, sys=0.00%, ctx=78, majf=0, minf=1 00:34:52.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.239 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2686383: Wed Nov 6 14:16:38 2024 00:34:52.239 read: IOPS=1391, BW=5563KiB/s (5697kB/s)(17.2MiB/3160msec) 00:34:52.239 slat (usec): min=6, max=21537, avg=43.50, stdev=556.14 00:34:52.239 clat (usec): min=168, max=2184, avg=663.95, stdev=129.74 00:34:52.239 lat (usec): min=175, max=22315, avg=707.45, stdev=574.74 00:34:52.239 clat percentiles (usec): 00:34:52.239 | 1.00th=[ 258], 5.00th=[ 412], 10.00th=[ 502], 20.00th=[ 578], 00:34:52.239 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 709], 00:34:52.239 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 832], 00:34:52.239 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 1156], 99.95th=[ 1319], 00:34:52.239 | 99.99th=[ 2180] 00:34:52.239 bw ( KiB/s): min= 5200, max= 5976, per=66.80%, avg=5626.67, stdev=280.33, samples=6 00:34:52.239 iops : min= 1300, max= 1494, avg=1406.67, stdev=70.08, samples=6 00:34:52.239 lat (usec) : 250=0.77%, 500=9.12%, 750=64.38%, 1000=25.52% 00:34:52.239 lat 
(msec) : 2=0.16%, 4=0.02% 00:34:52.239 cpu : usr=1.61%, sys=5.57%, ctx=4402, majf=0, minf=2 00:34:52.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 issued rwts: total=4396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.239 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2686384: Wed Nov 6 14:16:38 2024 00:34:52.239 read: IOPS=761, BW=3043KiB/s (3116kB/s)(8484KiB/2788msec) 00:34:52.239 slat (usec): min=7, max=20402, avg=38.95, stdev=466.93 00:34:52.239 clat (usec): min=739, max=42209, avg=1256.37, stdev=3050.87 00:34:52.239 lat (usec): min=765, max=42236, avg=1295.33, stdev=3085.61 00:34:52.239 clat percentiles (usec): 00:34:52.239 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 971], 00:34:52.239 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:34:52.239 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:52.239 | 99.00th=[ 1205], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:34:52.239 | 99.99th=[42206] 00:34:52.239 bw ( KiB/s): min= 1944, max= 3776, per=36.10%, avg=3041.60, stdev=932.53, samples=5 00:34:52.239 iops : min= 486, max= 944, avg=760.40, stdev=233.13, samples=5 00:34:52.239 lat (usec) : 750=0.14%, 1000=33.32% 00:34:52.239 lat (msec) : 2=65.93%, 50=0.57% 00:34:52.239 cpu : usr=0.97%, sys=2.48%, ctx=2125, majf=0, minf=2 00:34:52.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.239 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:52.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.239 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2686385: Wed Nov 6 14:16:38 2024 00:34:52.239 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(252KiB/2623msec) 00:34:52.239 slat (nsec): min=25162, max=59750, avg=26042.94, stdev=4289.79 00:34:52.239 clat (usec): min=774, max=42164, avg=41248.23, stdev=5187.21 00:34:52.239 lat (usec): min=833, max=42190, avg=41274.29, stdev=5182.90 00:34:52.239 clat percentiles (usec): 00:34:52.239 | 1.00th=[ 775], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:34:52.240 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:52.240 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:52.240 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:52.240 | 99.99th=[42206] 00:34:52.240 bw ( KiB/s): min= 96, max= 96, per=1.14%, avg=96.00, stdev= 0.00, samples=5 00:34:52.240 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:52.240 lat (usec) : 1000=1.56% 00:34:52.240 lat (msec) : 50=96.88% 00:34:52.240 cpu : usr=0.11%, sys=0.00%, ctx=65, majf=0, minf=2 00:34:52.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.240 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.240 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.240 00:34:52.240 Run status group 0 (all jobs): 00:34:52.240 READ: bw=8423KiB/s (8625kB/s), 96.1KiB/s-5563KiB/s (98.4kB/s-5697kB/s), io=26.0MiB (27.3MB), run=2623-3160msec 00:34:52.240 00:34:52.240 Disk stats (read/write): 00:34:52.240 nvme0n1: ios=72/0, merge=0/0, ticks=2835/0, in_queue=2835, util=94.79% 00:34:52.240 nvme0n2: ios=4337/0, merge=0/0, ticks=2454/0, 
in_queue=2454, util=93.22% 00:34:52.240 nvme0n3: ios=1986/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.07% 00:34:52.240 nvme0n4: ios=62/0, merge=0/0, ticks=2559/0, in_queue=2559, util=96.43% 00:34:52.500 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.500 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:52.760 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.760 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:52.760 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.760 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:53.021 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.021 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2686197 00:34:53.283 14:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:53.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:53.283 nvmf hotplug test: fio failed as expected 00:34:53.283 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.543 rmmod nvme_tcp 00:34:53.543 rmmod nvme_fabrics 00:34:53.543 rmmod nvme_keyring 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2682945 ']' 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2682945 00:34:53.543 14:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2682945 ']' 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2682945 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2682945 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2682945' 00:34:53.543 killing process with pid 2682945 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2682945 00:34:53.543 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2682945 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-restore 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.803 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.804 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.716 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.976 00:34:55.976 real 0m28.410s 00:34:55.976 user 2m11.313s 00:34:55.976 sys 0m12.470s 00:34:55.976 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:55.976 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:55.976 ************************************ 00:34:55.976 END TEST nvmf_fio_target 00:34:55.976 ************************************ 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:55.976 14:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:55.976 ************************************ 00:34:55.976 START TEST nvmf_bdevio 00:34:55.976 ************************************ 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.976 * Looking for test storage... 00:34:55.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:55.976 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.238 14:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.238 --rc genhtml_branch_coverage=1 
00:34:56.238 --rc genhtml_function_coverage=1 00:34:56.238 --rc genhtml_legend=1 00:34:56.238 --rc geninfo_all_blocks=1 00:34:56.238 --rc geninfo_unexecuted_blocks=1 00:34:56.238 00:34:56.238 ' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.238 --rc genhtml_branch_coverage=1 00:34:56.238 --rc genhtml_function_coverage=1 00:34:56.238 --rc genhtml_legend=1 00:34:56.238 --rc geninfo_all_blocks=1 00:34:56.238 --rc geninfo_unexecuted_blocks=1 00:34:56.238 00:34:56.238 ' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.238 --rc genhtml_branch_coverage=1 00:34:56.238 --rc genhtml_function_coverage=1 00:34:56.238 --rc genhtml_legend=1 00:34:56.238 --rc geninfo_all_blocks=1 00:34:56.238 --rc geninfo_unexecuted_blocks=1 00:34:56.238 00:34:56.238 ' 00:34:56.238 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.238 --rc genhtml_branch_coverage=1 00:34:56.239 --rc genhtml_function_coverage=1 00:34:56.239 --rc genhtml_legend=1 00:34:56.239 --rc geninfo_all_blocks=1 00:34:56.239 --rc geninfo_unexecuted_blocks=1 00:34:56.239 00:34:56.239 ' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.239 14:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.239 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.390 14:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.390 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:04.390 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:04.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.391 14:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:04.391 Found net devices under 0000:31:00.0: cvl_0_0 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:04.391 Found net devices under 0000:31:00.1: cvl_0_1 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.391 14:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:04.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:35:04.391 00:35:04.391 --- 10.0.0.2 ping statistics --- 00:35:04.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.391 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:04.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:35:04.391 00:35:04.391 --- 10.0.0.1 ping statistics --- 00:35:04.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.391 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2691444 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2691444 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2691444 ']' 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:04.391 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.391 [2024-11-06 14:16:49.919495] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:04.391 [2024-11-06 14:16:49.920664] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:35:04.391 [2024-11-06 14:16:49.920717] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.391 [2024-11-06 14:16:50.022592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:04.391 [2024-11-06 14:16:50.077624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.391 [2024-11-06 14:16:50.077675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.391 [2024-11-06 14:16:50.077684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.392 [2024-11-06 14:16:50.077693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.392 [2024-11-06 14:16:50.077699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.392 [2024-11-06 14:16:50.079708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:04.392 [2024-11-06 14:16:50.079851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:04.392 [2024-11-06 14:16:50.080178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:04.392 [2024-11-06 14:16:50.080182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:04.392 [2024-11-06 14:16:50.171553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:04.392 [2024-11-06 14:16:50.172636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:04.392 [2024-11-06 14:16:50.172929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:04.392 [2024-11-06 14:16:50.173434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:04.392 [2024-11-06 14:16:50.173473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 [2024-11-06 14:16:50.773240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 Malloc0 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.654 [2024-11-06 14:16:50.865391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:04.654 { 00:35:04.654 "params": { 00:35:04.654 "name": "Nvme$subsystem", 00:35:04.654 "trtype": "$TEST_TRANSPORT", 00:35:04.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.654 "adrfam": "ipv4", 00:35:04.654 "trsvcid": "$NVMF_PORT", 00:35:04.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.654 "hdgst": ${hdgst:-false}, 00:35:04.654 "ddgst": ${ddgst:-false} 00:35:04.654 }, 00:35:04.654 "method": "bdev_nvme_attach_controller" 00:35:04.654 } 00:35:04.654 EOF 00:35:04.654 )") 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:04.654 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:04.654 "params": { 00:35:04.654 "name": "Nvme1", 00:35:04.654 "trtype": "tcp", 00:35:04.654 "traddr": "10.0.0.2", 00:35:04.654 "adrfam": "ipv4", 00:35:04.654 "trsvcid": "4420", 00:35:04.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.654 "hdgst": false, 00:35:04.654 "ddgst": false 00:35:04.654 }, 00:35:04.654 "method": "bdev_nvme_attach_controller" 00:35:04.654 }' 00:35:04.654 [2024-11-06 14:16:50.924413] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:35:04.654 [2024-11-06 14:16:50.924495] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691763 ] 00:35:04.916 [2024-11-06 14:16:51.018842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:04.916 [2024-11-06 14:16:51.075317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.916 [2024-11-06 14:16:51.075469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.916 [2024-11-06 14:16:51.075469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.177 I/O targets: 00:35:05.177 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:05.177 00:35:05.177 00:35:05.177 CUnit - A unit testing framework for C - Version 2.1-3 00:35:05.177 http://cunit.sourceforge.net/ 00:35:05.177 00:35:05.177 00:35:05.177 Suite: bdevio tests on: Nvme1n1 00:35:05.177 Test: blockdev write read block ...passed 00:35:05.177 Test: blockdev write zeroes read block ...passed 00:35:05.177 Test: blockdev write zeroes read no split ...passed 00:35:05.438 Test: blockdev 
write zeroes read split ...passed 00:35:05.438 Test: blockdev write zeroes read split partial ...passed 00:35:05.438 Test: blockdev reset ...[2024-11-06 14:16:51.491677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:05.438 [2024-11-06 14:16:51.491782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e91c0 (9): Bad file descriptor 00:35:05.438 [2024-11-06 14:16:51.495842] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:05.438 passed 00:35:05.438 Test: blockdev write read 8 blocks ...passed 00:35:05.438 Test: blockdev write read size > 128k ...passed 00:35:05.438 Test: blockdev write read invalid size ...passed 00:35:05.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:05.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:05.438 Test: blockdev write read max offset ...passed 00:35:05.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:05.438 Test: blockdev writev readv 8 blocks ...passed 00:35:05.438 Test: blockdev writev readv 30 x 1block ...passed 00:35:05.438 Test: blockdev writev readv block ...passed 00:35:05.438 Test: blockdev writev readv size > 128k ...passed 00:35:05.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:05.438 Test: blockdev comparev and writev ...[2024-11-06 14:16:51.712819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.438 [2024-11-06 14:16:51.712870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.438 [2024-11-06 14:16:51.712893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.438 
[2024-11-06 14:16:51.712903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.438 [2024-11-06 14:16:51.713271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.438 [2024-11-06 14:16:51.713284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:05.438 [2024-11-06 14:16:51.713298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.438 [2024-11-06 14:16:51.713307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:05.438 [2024-11-06 14:16:51.713661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.438 [2024-11-06 14:16:51.713674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:05.438 [2024-11-06 14:16:51.713688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.439 [2024-11-06 14:16:51.713695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:05.439 [2024-11-06 14:16:51.714063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.439 [2024-11-06 14:16:51.714076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:05.439 [2024-11-06 14:16:51.714090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.439 [2024-11-06 14:16:51.714098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:05.699 passed 00:35:05.699 Test: blockdev nvme passthru rw ...passed 00:35:05.700 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:16:51.797018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.700 [2024-11-06 14:16:51.797038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:05.700 [2024-11-06 14:16:51.797156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.700 [2024-11-06 14:16:51.797167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:05.700 [2024-11-06 14:16:51.797287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.700 [2024-11-06 14:16:51.797299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:05.700 [2024-11-06 14:16:51.797416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.700 [2024-11-06 14:16:51.797427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:05.700 passed 00:35:05.700 Test: blockdev nvme admin passthru ...passed 00:35:05.700 Test: blockdev copy ...passed 00:35:05.700 00:35:05.700 Run Summary: Type Total Ran Passed Failed Inactive 00:35:05.700 suites 1 1 n/a 0 0 00:35:05.700 tests 23 23 23 0 0 00:35:05.700 asserts 152 152 152 0 n/a 00:35:05.700 00:35:05.700 Elapsed time = 1.000 
seconds 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:05.962 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.962 rmmod nvme_tcp 00:35:05.962 rmmod nvme_fabrics 00:35:05.962 rmmod nvme_keyring 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2691444 ']' 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2691444 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2691444 ']' 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2691444 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2691444 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2691444' 00:35:05.962 killing process with pid 2691444 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2691444 00:35:05.962 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2691444 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.223 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.772 00:35:08.772 real 0m12.351s 00:35:08.772 user 0m9.750s 00:35:08.772 sys 0m6.598s 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.772 ************************************ 00:35:08.772 END TEST nvmf_bdevio 00:35:08.772 ************************************ 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:08.772 00:35:08.772 real 5m1.665s 00:35:08.772 user 10m8.065s 00:35:08.772 sys 2m5.162s 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:35:08.772 14:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:08.772 ************************************ 00:35:08.772 END TEST nvmf_target_core_interrupt_mode 00:35:08.772 ************************************ 00:35:08.772 14:16:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.772 14:16:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:08.772 14:16:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:08.772 14:16:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.772 ************************************ 00:35:08.772 START TEST nvmf_interrupt 00:35:08.772 ************************************ 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.772 * Looking for test storage... 
00:35:08.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.772 --rc genhtml_branch_coverage=1 00:35:08.772 --rc genhtml_function_coverage=1 00:35:08.772 --rc genhtml_legend=1 00:35:08.772 --rc geninfo_all_blocks=1 00:35:08.772 --rc geninfo_unexecuted_blocks=1 00:35:08.772 00:35:08.772 ' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.772 --rc genhtml_branch_coverage=1 00:35:08.772 --rc 
genhtml_function_coverage=1 00:35:08.772 --rc genhtml_legend=1 00:35:08.772 --rc geninfo_all_blocks=1 00:35:08.772 --rc geninfo_unexecuted_blocks=1 00:35:08.772 00:35:08.772 ' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.772 --rc genhtml_branch_coverage=1 00:35:08.772 --rc genhtml_function_coverage=1 00:35:08.772 --rc genhtml_legend=1 00:35:08.772 --rc geninfo_all_blocks=1 00:35:08.772 --rc geninfo_unexecuted_blocks=1 00:35:08.772 00:35:08.772 ' 00:35:08.772 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.772 --rc genhtml_branch_coverage=1 00:35:08.772 --rc genhtml_function_coverage=1 00:35:08.772 --rc genhtml_legend=1 00:35:08.772 --rc geninfo_all_blocks=1 00:35:08.772 --rc geninfo_unexecuted_blocks=1 00:35:08.772 00:35:08.773 ' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.773 
14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.773 
14:16:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.773 14:16:54 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.773 
14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.773 14:16:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.919 14:17:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:16.919 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:16.919 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.919 14:17:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:16.919 Found net devices under 0000:31:00.0: cvl_0_0 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:16.919 Found net devices under 0000:31:00.1: cvl_0_1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.919 14:17:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:35:16.919 00:35:16.919 --- 10.0.0.2 ping statistics --- 00:35:16.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.919 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:35:16.919 00:35:16.919 --- 10.0.0.1 ping statistics --- 00:35:16.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.919 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.919 14:17:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2696176 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2696176 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2696176 ']' 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:16.919 14:17:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.919 [2024-11-06 14:17:02.463365] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:16.919 [2024-11-06 14:17:02.464491] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:35:16.919 [2024-11-06 14:17:02.464546] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.919 [2024-11-06 14:17:02.566523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:16.919 [2024-11-06 14:17:02.618177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.919 [2024-11-06 14:17:02.618228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.919 [2024-11-06 14:17:02.618237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.919 [2024-11-06 14:17:02.618245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.919 [2024-11-06 14:17:02.618251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.919 [2024-11-06 14:17:02.620032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.919 [2024-11-06 14:17:02.620165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.919 [2024-11-06 14:17:02.697824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:16.919 [2024-11-06 14:17:02.698361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:16.919 [2024-11-06 14:17:02.698687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:17.180 5000+0 records in 00:35:17.180 5000+0 records out 00:35:17.180 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0189505 s, 540 MB/s 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.180 AIO0 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.180 14:17:03 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.180 [2024-11-06 14:17:03.417106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.180 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.441 [2024-11-06 14:17:03.461800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2696176 0 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 0 idle 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.441 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696176 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0' 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696176 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.442 
14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2696176 1 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 1 idle 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:17.442 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696181 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696181 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2696515 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2696176 0 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2696176 0 busy 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:17.704 14:17:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696176 root 20 0 128.2g 44928 32256 R 53.3 0.0 0:00.40 reactor_0' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696176 root 20 0 128.2g 44928 32256 R 53.3 0.0 0:00.40 reactor_0 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=53.3 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=53 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.965 14:17:04 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2696176 1 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2696176 1 busy 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696181 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.22 reactor_1' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696181 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.22 reactor_1 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.965 14:17:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2696515 00:35:27.971 Initializing NVMe Controllers 00:35:27.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:27.971 Controller IO queue size 256, less than required. 00:35:27.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:27.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:27.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:27.971 Initialization complete. Launching workers. 
00:35:27.971 ======================================================== 00:35:27.971 Latency(us) 00:35:27.971 Device Information : IOPS MiB/s Average min max 00:35:27.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19606.98 76.59 13060.71 4244.53 33518.83 00:35:27.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19953.28 77.94 12831.54 8169.59 30359.52 00:35:27.971 ======================================================== 00:35:27.971 Total : 39560.27 154.53 12945.12 4244.53 33518.83 00:35:27.971 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2696176 0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 0 idle 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
2696176 -w 256 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696176 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696176 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2696176 1 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 1 idle 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.971 14:17:14 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:27.971 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696181 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696181 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.233 14:17:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:28.805 14:17:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
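[Editor's note] The reactor_is_busy_or_idle probes traced above reduce to parsing one batch-mode top(1) snapshot. A minimal standalone sketch, with the top output stubbed using the exact reactor_0 line from this log (field positions assumed from the trace; this is not the full interrupt/common.sh helper):

```shell
# One thread line from `top -bHn 1 -p <pid> -w 256 | grep reactor_0`,
# copied from the trace above instead of sampling a live process.
top_reactor='2696176 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0'
idle_threshold=30                 # same threshold the harness uses

# Strip leading whitespace and take field 9, which is %CPU in top's layout.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}           # truncate "0.0" -> "0" for integer arithmetic

if (( cpu_rate > idle_threshold )); then verdict=busy; else verdict=idle; fi
echo "reactor_0 is $verdict"
```

With the logged sample (0.0% CPU against a 30% threshold) the verdict is idle, matching the `return 0` in the trace.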
00:35:28.805 14:17:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:28.805 14:17:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:28.805 14:17:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:28.805 14:17:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2696176 0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 0 idle 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696176 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.70 reactor_0' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696176 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.70 reactor_0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2696176 1 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2696176 1 idle 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2696176 00:35:31.353 
14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2696176 -w 256 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2696181 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2696181 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:31.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.353 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.353 rmmod nvme_tcp 00:35:31.614 rmmod nvme_fabrics 00:35:31.615 rmmod nvme_keyring 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.615 14:17:17 
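[Editor's note] The waitforserial_disconnect call traced above polls lsblk until no block device reports the NVMe serial anymore. A condensed sketch of that loop, with the lsblk call stubbed by a fixed snapshot so it runs anywhere (the stub and loop bound are assumptions mirroring the trace, not the verbatim autotest helper):

```shell
serial=SPDKISFASTANDAWESOME
snapshot='nvme0n1 OTHERSERIAL'    # stand-in for: lsblk -l -o NAME,SERIAL
gone=0
i=0
# Retry up to 16 times, as the traced helper's counter suggests.
while (( i++ <= 15 )); do
    if ! echo "$snapshot" | grep -q -w "$serial"; then
        gone=1                    # serial no longer listed: disconnect done
        break
    fi
    sleep 2
done
echo "serial gone: $gone"
```

In the real run the loop exits on the first pass too, since the controller was already disconnected before the wait.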
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2696176 ']' 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2696176 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2696176 ']' 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2696176 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2696176 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2696176' 00:35:31.615 killing process with pid 2696176 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2696176 00:35:31.615 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2696176 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.876 14:17:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.789 14:17:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:33.789 00:35:33.789 real 0m25.460s 00:35:33.789 user 0m40.297s 00:35:33.789 sys 0m9.901s 00:35:33.789 14:17:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:33.789 14:17:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:33.789 ************************************ 00:35:33.789 END TEST nvmf_interrupt 00:35:33.789 ************************************ 00:35:33.789 00:35:33.789 real 30m22.212s 00:35:33.789 user 61m39.515s 00:35:33.789 sys 10m22.610s 00:35:33.789 14:17:20 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:33.789 14:17:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.789 ************************************ 00:35:33.789 END TEST nvmf_tcp 00:35:33.789 ************************************ 00:35:34.049 14:17:20 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:34.049 14:17:20 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.049 14:17:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:34.049 14:17:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:34.049 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:35:34.049 ************************************ 
00:35:34.049 START TEST spdkcli_nvmf_tcp 00:35:34.049 ************************************ 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.049 * Looking for test storage... 00:35:34.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.049 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.310 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.310 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.310 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.310 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.310 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:34.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.311 --rc genhtml_branch_coverage=1 00:35:34.311 --rc genhtml_function_coverage=1 00:35:34.311 --rc genhtml_legend=1 00:35:34.311 --rc geninfo_all_blocks=1 00:35:34.311 --rc geninfo_unexecuted_blocks=1 00:35:34.311 00:35:34.311 ' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:34.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.311 --rc genhtml_branch_coverage=1 00:35:34.311 --rc genhtml_function_coverage=1 00:35:34.311 --rc genhtml_legend=1 00:35:34.311 --rc geninfo_all_blocks=1 
00:35:34.311 --rc geninfo_unexecuted_blocks=1 00:35:34.311 00:35:34.311 ' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:34.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.311 --rc genhtml_branch_coverage=1 00:35:34.311 --rc genhtml_function_coverage=1 00:35:34.311 --rc genhtml_legend=1 00:35:34.311 --rc geninfo_all_blocks=1 00:35:34.311 --rc geninfo_unexecuted_blocks=1 00:35:34.311 00:35:34.311 ' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:34.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.311 --rc genhtml_branch_coverage=1 00:35:34.311 --rc genhtml_function_coverage=1 00:35:34.311 --rc genhtml_legend=1 00:35:34.311 --rc geninfo_all_blocks=1 00:35:34.311 --rc geninfo_unexecuted_blocks=1 00:35:34.311 00:35:34.311 ' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
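[Editor's note] The lcov version gate traced a little earlier (`lt 1.15 2` via cmp_versions) compares dot-separated components left to right, padding the shorter version with zeros. A standalone sketch of that comparison (a simplified reimplementation, not the verbatim scripts/common.sh code):

```shell
# ver_lt A B: succeed if version A sorts strictly before version B.
ver_lt() {
    local IFS='.-:'               # split on the same separators the trace shows
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local v x y
    for (( v = 0; v < n; v++ )); do
        x=${a[v]:-0}              # missing components count as 0
        y=${b[v]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && old_lcov=yes || old_lcov=no
echo "lcov 1.15 < 2: $old_lcov"
```

Here 1 < 2 decides the result on the first component, which is why the trace takes the legacy-lcov branch and exports the extra `--rc lcov_*_coverage` options.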
00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2699720 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2699720 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2699720 ']' 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:34.311 
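[Editor's note] The `line 33: [: : integer expression expected` message above is test(1) choking on an empty operand in `'[' '' -eq 1 ']'`. A two-line reproduction of the failure plus the usual `${var:-0}` guard (the guard is a suggested fix, not something the logged script does):

```shell
val=''
# Unguarded: -eq against an empty string is an error; test exits non-zero.
[ "$val" -eq 1 ] 2>/dev/null && hit=yes || hit=no
# Guarded: an empty value falls back to 0, so the comparison is well-formed.
[ "${val:-0}" -eq 1 ] && guarded=yes || guarded=no
echo "unguarded: $hit, guarded: $guarded"
```

Both branches end up false here, but only the unguarded form prints the error the log captured; the script continues because the result is only used as a boolean.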
14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.311 14:17:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.311 [2024-11-06 14:17:20.440741] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:35:34.311 [2024-11-06 14:17:20.440833] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699720 ] 00:35:34.311 [2024-11-06 14:17:20.534827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:34.573 [2024-11-06 14:17:20.588317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.573 [2024-11-06 14:17:20.588322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.144 14:17:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:35.145 14:17:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:35.145 14:17:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:35:35.145 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.145 14:17:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.145 14:17:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:35.145 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:35.145 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:35.145 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:35.145 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:35.145 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:35.145 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:35.145 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.145 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.145 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:35.145 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:35.145 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:35.145 ' 00:35:38.449 [2024-11-06 14:17:24.050993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.406 [2024-11-06 14:17:25.411208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:42.040 [2024-11-06 14:17:27.930259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:35:43.950 [2024-11-06 14:17:30.160679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:45.862 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:45.862 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:45.862 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.862 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.862 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:45.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:45.862 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:45.862 14:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:46.122 14:17:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.383 14:17:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:46.383 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:46.383 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:46.383 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:46.383 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:46.383 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:46.383 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:46.383 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:46.383 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:46.383 ' 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:52.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:52.968 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:52.968 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:52.968 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2699720 ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2699720' 00:35:52.968 killing process with pid 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2699720 00:35:52.968 14:17:38 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2699720 ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2699720 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2699720 ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2699720 00:35:52.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2699720) - No such process 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 2699720 is not found' 00:35:52.968 Process with pid 2699720 is not found 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:52.968 00:35:52.968 real 0m18.199s 00:35:52.968 user 0m40.389s 00:35:52.968 sys 0m0.917s 00:35:52.968 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:52.969 14:17:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.969 ************************************ 00:35:52.969 END TEST spdkcli_nvmf_tcp 00:35:52.969 ************************************ 00:35:52.969 14:17:38 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.969 14:17:38 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:52.969 14:17:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:35:52.969 14:17:38 -- common/autotest_common.sh@10 -- # set +x 00:35:52.969 ************************************ 00:35:52.969 START TEST nvmf_identify_passthru 00:35:52.969 ************************************ 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.969 * Looking for test storage... 00:35:52.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.969 --rc genhtml_branch_coverage=1 00:35:52.969 --rc genhtml_function_coverage=1 00:35:52.969 --rc genhtml_legend=1 00:35:52.969 --rc geninfo_all_blocks=1 00:35:52.969 --rc geninfo_unexecuted_blocks=1 00:35:52.969 
00:35:52.969 ' 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.969 --rc genhtml_branch_coverage=1 00:35:52.969 --rc genhtml_function_coverage=1 00:35:52.969 --rc genhtml_legend=1 00:35:52.969 --rc geninfo_all_blocks=1 00:35:52.969 --rc geninfo_unexecuted_blocks=1 00:35:52.969 00:35:52.969 ' 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.969 --rc genhtml_branch_coverage=1 00:35:52.969 --rc genhtml_function_coverage=1 00:35:52.969 --rc genhtml_legend=1 00:35:52.969 --rc geninfo_all_blocks=1 00:35:52.969 --rc geninfo_unexecuted_blocks=1 00:35:52.969 00:35:52.969 ' 00:35:52.969 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.969 --rc genhtml_branch_coverage=1 00:35:52.969 --rc genhtml_function_coverage=1 00:35:52.969 --rc genhtml_legend=1 00:35:52.969 --rc geninfo_all_blocks=1 00:35:52.969 --rc geninfo_unexecuted_blocks=1 00:35:52.969 00:35:52.969 ' 00:35:52.969 14:17:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.969 14:17:38 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.969 14:17:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.969 14:17:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.969 14:17:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.969 14:17:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.969 14:17:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:52.969 14:17:38 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.969 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.969 14:17:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.969 14:17:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.970 14:17:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.970 14:17:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.970 14:17:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.970 14:17:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.970 14:17:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.970 14:17:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.970 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:52.970 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.970 14:17:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.970 14:17:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.115 
14:17:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:01.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:01.115 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:01.115 Found net devices under 0000:31:00.0: cvl_0_0 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.115 14:17:45 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:01.115 Found net devices under 0000:31:00.1: cvl_0_1 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.115 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.116 
14:17:45 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.116 14:17:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:01.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:36:01.116 00:36:01.116 --- 10.0.0.2 ping statistics --- 00:36:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.116 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:01.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:36:01.116 00:36:01.116 --- 10.0.0.1 ping statistics --- 00:36:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.116 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:01.116 14:17:46 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:01.116 
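The nvmf_tcp_init steps traced above (common.sh@250–291) can be condensed into a short sketch. The interface names, IPs, and port come from the trace itself; the `run` helper echoes each command instead of executing it, since the real commands need root and physical NICs, so this is a dry-run illustration of the topology, not the test harness's actual code.

```shell
# Dry-run sketch of the namespace-based TCP test topology:
# the target-side NIC is moved into a netns, the initiator NIC
# stays in the default namespace, and reachability is verified.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target side, moved into the namespace
INI_IF=cvl_0_1      # initiator side, default namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Because the target interface lives in its own namespace, the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.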
14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:01.116 14:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:01.116 14:17:46 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605500 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:01.116 14:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:01.116 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:01.116 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:01.116 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.116 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.116 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:01.116 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.116 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.377 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2707026 00:36:01.377 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:01.377 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:01.377 14:17:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2707026 00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 2707026 ']' 00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:01.377 14:17:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.377 [2024-11-06 14:17:47.453924] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:36:01.377 [2024-11-06 14:17:47.453993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.377 [2024-11-06 14:17:47.554027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.377 [2024-11-06 14:17:47.608618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.377 [2024-11-06 14:17:47.608673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.377 [2024-11-06 14:17:47.608682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.377 [2024-11-06 14:17:47.608690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.377 [2024-11-06 14:17:47.608696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:01.377 [2024-11-06 14:17:47.611043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.377 [2024-11-06 14:17:47.611204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.377 [2024-11-06 14:17:47.611351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.377 [2024-11-06 14:17:47.611352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.318 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.318 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:02.318 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 INFO: Log level set to 20 00:36:02.319 INFO: Requests: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "method": "nvmf_set_config", 00:36:02.319 "id": 1, 00:36:02.319 "params": { 00:36:02.319 "admin_cmd_passthru": { 00:36:02.319 "identify_ctrlr": true 00:36:02.319 } 00:36:02.319 } 00:36:02.319 } 00:36:02.319 00:36:02.319 INFO: response: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "id": 1, 00:36:02.319 "result": true 00:36:02.319 } 00:36:02.319 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.319 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 INFO: Setting log level to 20 00:36:02.319 INFO: Setting log level to 20 00:36:02.319 INFO: Log level set to 20 00:36:02.319 INFO: Log level set to 20 00:36:02.319 
INFO: Requests: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "method": "framework_start_init", 00:36:02.319 "id": 1 00:36:02.319 } 00:36:02.319 00:36:02.319 INFO: Requests: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "method": "framework_start_init", 00:36:02.319 "id": 1 00:36:02.319 } 00:36:02.319 00:36:02.319 [2024-11-06 14:17:48.379198] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:02.319 INFO: response: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "id": 1, 00:36:02.319 "result": true 00:36:02.319 } 00:36:02.319 00:36:02.319 INFO: response: 00:36:02.319 { 00:36:02.319 "jsonrpc": "2.0", 00:36:02.319 "id": 1, 00:36:02.319 "result": true 00:36:02.319 } 00:36:02.319 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.319 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 INFO: Setting log level to 40 00:36:02.319 INFO: Setting log level to 40 00:36:02.319 INFO: Setting log level to 40 00:36:02.319 [2024-11-06 14:17:48.392788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.319 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:02.319 14:17:48 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.319 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.580 Nvme0n1 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.580 [2024-11-06 14:17:48.787683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.580 14:17:48 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.580 [ 00:36:02.580 { 00:36:02.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:02.580 "subtype": "Discovery", 00:36:02.580 "listen_addresses": [], 00:36:02.580 "allow_any_host": true, 00:36:02.580 "hosts": [] 00:36:02.580 }, 00:36:02.580 { 00:36:02.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.580 "subtype": "NVMe", 00:36:02.580 "listen_addresses": [ 00:36:02.580 { 00:36:02.580 "trtype": "TCP", 00:36:02.580 "adrfam": "IPv4", 00:36:02.580 "traddr": "10.0.0.2", 00:36:02.580 "trsvcid": "4420" 00:36:02.580 } 00:36:02.580 ], 00:36:02.580 "allow_any_host": true, 00:36:02.580 "hosts": [], 00:36:02.580 "serial_number": "SPDK00000000000001", 00:36:02.580 "model_number": "SPDK bdev Controller", 00:36:02.580 "max_namespaces": 1, 00:36:02.580 "min_cntlid": 1, 00:36:02.580 "max_cntlid": 65519, 00:36:02.580 "namespaces": [ 00:36:02.580 { 00:36:02.580 "nsid": 1, 00:36:02.580 "bdev_name": "Nvme0n1", 00:36:02.580 "name": "Nvme0n1", 00:36:02.580 "nguid": "36344730526055000025384500000031", 00:36:02.580 "uuid": "36344730-5260-5500-0025-384500000031" 00:36:02.580 } 00:36:02.580 ] 00:36:02.580 } 00:36:02.580 ] 00:36:02.580 14:17:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:02.580 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:02.841 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:36:02.841 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:02.841 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:02.841 14:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:03.102 14:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.102 rmmod nvme_tcp 00:36:03.102 rmmod nvme_fabrics 00:36:03.102 rmmod nvme_keyring 00:36:03.102 14:17:49 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2707026 ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2707026 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 2707026 ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 2707026 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2707026 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2707026' 00:36:03.102 killing process with pid 2707026 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 2707026 00:36:03.102 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 2707026 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:03.363 14:17:49 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.363 14:17:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.363 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.363 14:17:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.906 14:17:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.906 00:36:05.906 real 0m13.267s 00:36:05.906 user 0m10.230s 00:36:05.906 sys 0m6.780s 00:36:05.906 14:17:51 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:05.906 14:17:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:05.906 ************************************ 00:36:05.906 END TEST nvmf_identify_passthru 00:36:05.906 ************************************ 00:36:05.906 14:17:51 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:05.906 14:17:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:05.906 14:17:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:05.906 14:17:51 -- common/autotest_common.sh@10 -- # set +x 00:36:05.906 ************************************ 00:36:05.906 START TEST nvmf_dif 00:36:05.906 ************************************ 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:05.906 * Looking for test storage... 
00:36:05.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.906 14:17:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.906 --rc genhtml_branch_coverage=1 00:36:05.906 --rc genhtml_function_coverage=1 00:36:05.906 --rc genhtml_legend=1 00:36:05.906 --rc geninfo_all_blocks=1 00:36:05.906 --rc geninfo_unexecuted_blocks=1 00:36:05.906 00:36:05.906 ' 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.906 --rc genhtml_branch_coverage=1 00:36:05.906 --rc genhtml_function_coverage=1 00:36:05.906 --rc genhtml_legend=1 00:36:05.906 --rc geninfo_all_blocks=1 00:36:05.906 --rc geninfo_unexecuted_blocks=1 00:36:05.906 00:36:05.906 ' 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:36:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.906 --rc genhtml_branch_coverage=1 00:36:05.906 --rc genhtml_function_coverage=1 00:36:05.906 --rc genhtml_legend=1 00:36:05.906 --rc geninfo_all_blocks=1 00:36:05.906 --rc geninfo_unexecuted_blocks=1 00:36:05.906 00:36:05.906 ' 00:36:05.906 14:17:51 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.906 --rc genhtml_branch_coverage=1 00:36:05.906 --rc genhtml_function_coverage=1 00:36:05.906 --rc genhtml_legend=1 00:36:05.906 --rc geninfo_all_blocks=1 00:36:05.906 --rc geninfo_unexecuted_blocks=1 00:36:05.906 00:36:05.906 ' 00:36:05.906 14:17:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.906 14:17:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:05.906 14:17:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.906 14:17:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.906 14:17:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.906 14:17:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:05.907 14:17:51 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.907 14:17:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.907 14:17:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.907 14:17:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.907 14:17:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.907 14:17:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.907 14:17:51 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.907 14:17:51 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.907 14:17:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:05.907 14:17:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:05.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.907 14:17:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.907 14:17:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:05.907 14:17:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:05.907 14:17:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:05.907 14:17:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:05.907 14:17:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.907 14:17:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.907 14:17:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:05.907 14:17:52 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:05.907 14:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:14.045 14:17:59 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:14.045 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:14.045 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.045 14:17:59 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:14.045 Found net devices under 0000:31:00.0: cvl_0_0 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:14.045 Found net devices under 0000:31:00.1: cvl_0_1 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.045 
14:17:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.045 14:17:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:14.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms
00:36:14.046
00:36:14.046 --- 10.0.0.2 ping statistics ---
00:36:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:14.046 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms
00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:14.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:14.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:36:14.046
00:36:14.046 --- 10.0.0.1 ping statistics ---
00:36:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:14.046 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:36:14.046 14:17:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:17.351 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:36:17.351 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:17.351 14:18:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:36:17.351 14:18:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2713180
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2713180
00:36:17.351 14:18:03 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 2713180 ']'
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable
00:36:17.351 14:18:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:17.351 [2024-11-06 14:18:03.524151] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization...
00:36:17.351 [2024-11-06 14:18:03.524218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:17.351 [2024-11-06 14:18:03.623143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:17.612 [2024-11-06 14:18:03.674438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:17.612 [2024-11-06 14:18:03.674487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:17.612 [2024-11-06 14:18:03.674496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:17.612 [2024-11-06 14:18:03.674503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:17.612 [2024-11-06 14:18:03.674510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:17.612 [2024-11-06 14:18:03.675345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:18.184 14:18:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:18.184 14:18:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.184 14:18:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:18.184 14:18:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:18.184 [2024-11-06 14:18:04.387629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.184 14:18:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:18.184 ************************************ 00:36:18.184 START TEST fio_dif_1_default 00:36:18.184 ************************************ 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:18.184 bdev_null0 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.184 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:18.445 [2024-11-06 14:18:04.476078] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:18.445 { 00:36:18.445 "params": { 00:36:18.445 "name": "Nvme$subsystem", 00:36:18.445 "trtype": "$TEST_TRANSPORT", 00:36:18.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:18.445 "adrfam": "ipv4", 00:36:18.445 "trsvcid": "$NVMF_PORT", 00:36:18.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:18.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:18.445 "hdgst": ${hdgst:-false}, 00:36:18.445 "ddgst": ${ddgst:-false} 00:36:18.445 }, 00:36:18.445 "method": "bdev_nvme_attach_controller" 00:36:18.445 } 00:36:18.445 EOF 00:36:18.445 )") 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:18.445 "params": { 00:36:18.445 "name": "Nvme0", 00:36:18.445 "trtype": "tcp", 00:36:18.445 "traddr": "10.0.0.2", 00:36:18.445 "adrfam": "ipv4", 00:36:18.445 "trsvcid": "4420", 00:36:18.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:18.445 "hdgst": false, 00:36:18.445 "ddgst": false 00:36:18.445 }, 00:36:18.445 "method": "bdev_nvme_attach_controller" 00:36:18.445 }' 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:18.445 14:18:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:18.706 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:18.706 fio-3.35 
00:36:18.706 Starting 1 thread
00:36:30.940
00:36:30.940 filename0: (groupid=0, jobs=1): err= 0: pid=2713713: Wed Nov 6 14:18:15 2024
00:36:30.940 read: IOPS=290, BW=1160KiB/s (1188kB/s)(11.4MiB/10040msec)
00:36:30.940 slat (nsec): min=5486, max=88937, avg=6976.85, stdev=2145.12
00:36:30.940 clat (usec): min=536, max=43493, avg=13771.73, stdev=18801.76
00:36:30.940 lat (usec): min=541, max=43538, avg=13778.71, stdev=18801.28
00:36:30.940 clat percentiles (usec):
00:36:30.940 | 1.00th=[ 627], 5.00th=[ 783], 10.00th=[ 816], 20.00th=[ 848],
00:36:30.940 | 30.00th=[ 922], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020],
00:36:30.940 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:36:30.940 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:36:30.940 | 99.99th=[43254]
00:36:30.940 bw ( KiB/s): min= 704, max= 4224, per=100.00%, avg=1163.20, stdev=917.24, samples=20
00:36:30.940 iops : min= 176, max= 1056, avg=290.80, stdev=229.31, samples=20
00:36:30.940 lat (usec) : 750=3.23%, 1000=50.31%
00:36:30.940 lat (msec) : 2=14.59%, 50=31.87%
00:36:30.940 cpu : usr=93.66%, sys=6.06%, ctx=16, majf=0, minf=209
00:36:30.940 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:30.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:30.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:30.940 issued rwts: total=2912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:30.940 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:30.940
00:36:30.940 Run status group 0 (all jobs):
00:36:30.940 READ: bw=1160KiB/s (1188kB/s), 1160KiB/s-1160KiB/s (1188kB/s-1188kB/s), io=11.4MiB (11.9MB), run=10040-10040msec
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- #
for sub in "$@" 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.940 00:36:30.940 real 0m11.239s 00:36:30.940 user 0m16.616s 00:36:30.940 sys 0m1.084s 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.940 ************************************ 00:36:30.940 END TEST fio_dif_1_default 00:36:30.940 ************************************ 00:36:30.940 14:18:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:30.940 14:18:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:30.940 14:18:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:30.940 14:18:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:30.940 ************************************ 00:36:30.940 START TEST fio_dif_1_multi_subsystems 00:36:30.940 ************************************ 00:36:30.940 14:18:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:36:30.940 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 bdev_null0
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 [2024-11-06 14:18:15.796250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 bdev_null1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:30.941 {
00:36:30.941 "params": {
00:36:30.941 "name": "Nvme$subsystem",
00:36:30.941 "trtype": "$TEST_TRANSPORT",
00:36:30.941 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:30.941 "adrfam": "ipv4",
00:36:30.941 "trsvcid": "$NVMF_PORT",
00:36:30.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:30.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:30.941 "hdgst": ${hdgst:-false},
00:36:30.941 "ddgst": ${ddgst:-false}
00:36:30.941 },
00:36:30.941 "method": "bdev_nvme_attach_controller"
00:36:30.941 }
00:36:30.941 EOF
00:36:30.941 )")
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib=
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:30.941 {
00:36:30.941 "params": {
00:36:30.941 "name": "Nvme$subsystem",
00:36:30.941 "trtype": "$TEST_TRANSPORT",
00:36:30.941 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:30.941 "adrfam": "ipv4",
00:36:30.941 "trsvcid": "$NVMF_PORT",
00:36:30.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:30.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:30.941 "hdgst": ${hdgst:-false},
00:36:30.941 "ddgst": ${ddgst:-false}
00:36:30.941 },
00:36:30.941 "method": "bdev_nvme_attach_controller"
00:36:30.941 }
00:36:30.941 EOF
00:36:30.941 )")
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:30.941 "params": {
00:36:30.941 "name": "Nvme0",
00:36:30.941 "trtype": "tcp",
00:36:30.941 "traddr": "10.0.0.2",
00:36:30.941 "adrfam": "ipv4",
00:36:30.941 "trsvcid": "4420",
00:36:30.941 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:30.941 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:30.941 "hdgst": false,
00:36:30.941 "ddgst": false
00:36:30.941 },
00:36:30.941 "method": "bdev_nvme_attach_controller"
00:36:30.941 },{
00:36:30.941 "params": {
00:36:30.941 "name": "Nvme1",
00:36:30.941 "trtype": "tcp",
00:36:30.941 "traddr": "10.0.0.2",
00:36:30.941 "adrfam": "ipv4",
00:36:30.941 "trsvcid": "4420",
00:36:30.941 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:30.941 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:30.941 "hdgst": false,
00:36:30.941 "ddgst": false
00:36:30.941 },
00:36:30.941 "method": "bdev_nvme_attach_controller"
00:36:30.941 }'
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:30.941 14:18:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:30.941 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:30.941 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:30.941 fio-3.35
00:36:30.941 Starting 2 threads
00:36:40.939 
00:36:40.939 filename0: (groupid=0, jobs=1): err= 0: pid=2716546: Wed Nov 6 14:18:27 2024
00:36:40.939 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10029msec)
00:36:40.939 slat (nsec): min=5502, max=45932, avg=6981.80, stdev=3463.09
00:36:40.939 clat (usec): min=599, max=42459, avg=40913.25, stdev=2598.90
00:36:40.939 lat (usec): min=605, max=42492, avg=40920.24, stdev=2599.19
00:36:40.939 clat percentiles (usec):
00:36:40.939 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:36:40.939 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:40.939 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:36:40.939 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:36:40.939 | 99.99th=[42206]
00:36:40.939 bw ( KiB/s): min= 352, max= 416, per=34.00%, avg=390.40, stdev=16.74, samples=20
00:36:40.939 iops : min= 88, max= 104, avg=97.60, stdev= 4.19, samples=20
00:36:40.939 lat (usec) : 750=0.41%
00:36:40.939 lat (msec) : 50=99.59%
00:36:40.939 cpu : usr=95.48%, sys=4.32%, ctx=14, majf=0, minf=209
00:36:40.939 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:40.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.939 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:40.939 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:40.939 filename1: (groupid=0, jobs=1): err= 0: pid=2716547: Wed Nov 6 14:18:27 2024
00:36:40.939 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec)
00:36:40.939 slat (nsec): min=5486, max=35916, avg=6544.99, stdev=2432.97
00:36:40.939 clat (usec): min=522, max=43065, avg=21082.55, stdev=20173.04
00:36:40.939 lat (usec): min=528, max=43071, avg=21089.09, stdev=20172.84
00:36:40.939 clat percentiles (usec):
00:36:40.939 | 1.00th=[ 635], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 824],
00:36:40.939 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[40633], 60.00th=[41157],
00:36:40.939 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:36:40.939 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:36:40.939 | 99.99th=[43254]
00:36:40.939 bw ( KiB/s): min= 672, max= 768, per=66.17%, avg=759.58, stdev=25.78, samples=19
00:36:40.939 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19
00:36:40.939 lat (usec) : 750=7.28%, 1000=40.61%
00:36:40.939 lat (msec) : 2=1.90%, 50=50.21%
00:36:40.939 cpu : usr=95.84%, sys=3.96%, ctx=10, majf=0, minf=118
00:36:40.939 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:40.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.939 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:40.939 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:40.939 
00:36:40.939 Run status group 0 (all jobs):
00:36:40.939 READ: bw=1147KiB/s (1175kB/s), 391KiB/s-758KiB/s (400kB/s-776kB/s), io=11.2MiB (11.8MB), run=10002-10029msec
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:40.939 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 
00:36:41.201 real 0m11.502s
00:36:41.201 user 0m35.720s
00:36:41.201 sys 0m1.232s
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 ************************************
00:36:41.201 END TEST fio_dif_1_multi_subsystems
00:36:41.201 ************************************
00:36:41.201 14:18:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:36:41.201 14:18:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:36:41.201 14:18:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 ************************************
00:36:41.201 START TEST fio_dif_rand_params
00:36:41.201 ************************************
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 bdev_null0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:41.201 [2024-11-06 14:18:27.383450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:41.201 {
00:36:41.201 "params": {
00:36:41.201 "name": "Nvme$subsystem",
00:36:41.201 "trtype": "$TEST_TRANSPORT",
00:36:41.201 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:41.201 "adrfam": "ipv4",
00:36:41.201 "trsvcid": "$NVMF_PORT",
00:36:41.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:41.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:41.201 "hdgst": ${hdgst:-false},
00:36:41.201 "ddgst": ${ddgst:-false}
00:36:41.201 },
00:36:41.201 "method": "bdev_nvme_attach_controller"
00:36:41.201 }
00:36:41.201 EOF
00:36:41.201 )")
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib=
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:41.201 "params": {
00:36:41.201 "name": "Nvme0",
00:36:41.201 "trtype": "tcp",
00:36:41.201 "traddr": "10.0.0.2",
00:36:41.201 "adrfam": "ipv4",
00:36:41.201 "trsvcid": "4420",
00:36:41.201 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:41.201 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:41.201 "hdgst": false,
00:36:41.201 "ddgst": false
00:36:41.201 },
00:36:41.201 "method": "bdev_nvme_attach_controller"
00:36:41.201 }'
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=
00:36:41.201 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:36:41.202 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:41.202 14:18:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:41.806 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:41.806 ...
00:36:41.806 fio-3.35
00:36:41.806 Starting 3 threads
00:36:48.384 
00:36:48.384 filename0: (groupid=0, jobs=1): err= 0: pid=2718889: Wed Nov 6 14:18:33 2024
00:36:48.384 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(194MiB/5045msec)
00:36:48.384 slat (nsec): min=5724, max=46155, avg=8364.79, stdev=2033.26
00:36:48.384 clat (usec): min=4546, max=89475, avg=9707.48, stdev=7611.33
00:36:48.384 lat (usec): min=4552, max=89483, avg=9715.85, stdev=7611.57
00:36:48.384 clat percentiles (usec):
00:36:48.384 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7373],
00:36:48.384 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110],
00:36:48.384 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10945],
00:36:48.384 | 99.00th=[48497], 99.50th=[49021], 99.90th=[88605], 99.95th=[89654],
00:36:48.384 | 99.99th=[89654]
00:36:48.384 bw ( KiB/s): min=24832, max=51200, per=33.36%, avg=39705.60, stdev=9024.74, samples=10
00:36:48.384 iops : min= 194, max= 400, avg=310.20, stdev=70.51, samples=10
00:36:48.384 lat (msec) : 10=87.25%, 20=10.05%, 50=2.25%, 100=0.45%
00:36:48.384 cpu : usr=94.49%, sys=5.27%, ctx=7, majf=0, minf=127
00:36:48.384 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:48.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.384 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:48.384 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:48.384 filename0: (groupid=0, jobs=1): err= 0: pid=2718890: Wed Nov 6 14:18:33 2024
00:36:48.384 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(194MiB/5044msec)
00:36:48.384 slat (nsec): min=5714, max=31472, avg=8466.71, stdev=1980.12
00:36:48.384 clat (usec): min=4755, max=87961, avg=9706.21, stdev=5908.83
00:36:48.385 lat (usec): min=4764, max=87970, avg=9714.68, stdev=5908.91
00:36:48.385 clat percentiles (usec):
00:36:48.385 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 8029],
00:36:48.385 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372],
00:36:48.385 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[10945],
00:36:48.385 | 99.00th=[46400], 99.50th=[49021], 99.90th=[87557], 99.95th=[87557],
00:36:48.385 | 99.99th=[87557]
00:36:48.385 bw ( KiB/s): min=21248, max=45824, per=33.36%, avg=39705.60, stdev=7009.24, samples=10
00:36:48.385 iops : min= 166, max= 358, avg=310.20, stdev=54.76, samples=10
00:36:48.385 lat (msec) : 10=79.72%, 20=18.61%, 50=1.42%, 100=0.26%
00:36:48.385 cpu : usr=95.08%, sys=4.68%, ctx=6, majf=0, minf=93
00:36:48.385 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:48.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.385 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:48.385 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:48.385 filename0: (groupid=0, jobs=1): err= 0: pid=2718891: Wed Nov 6 14:18:33 2024
00:36:48.385 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(198MiB/5046msec)
00:36:48.385 slat (nsec): min=5556, max=34835, avg=8535.91, stdev=1584.15
00:36:48.385 clat (usec): min=5249, max=87498, avg=9507.08, stdev=5405.39
00:36:48.385 lat (usec): min=5257, max=87507, avg=9515.62, stdev=5405.57
00:36:48.385 clat percentiles (usec):
00:36:48.385 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7767],
00:36:48.385 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372],
00:36:48.385 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814],
00:36:48.385 | 99.00th=[48497], 99.50th=[49546], 99.90th=[53740], 99.95th=[87557],
00:36:48.385 | 99.99th=[87557]
00:36:48.385 bw ( KiB/s): min=33024, max=46080, per=34.07%, avg=40550.40, stdev=4201.66, samples=10
00:36:48.385 iops : min= 258, max= 360, avg=316.80, stdev=32.83, samples=10
00:36:48.385 lat (msec) : 10=81.78%, 20=16.65%, 50=1.26%, 100=0.32%
00:36:48.385 cpu : usr=94.67%, sys=5.09%, ctx=11, majf=0, minf=86
00:36:48.385 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:48.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:48.385 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:48.385 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:48.385 
00:36:48.385 Run status group 0 (all jobs):
00:36:48.385 READ: bw=116MiB/s (122MB/s), 38.5MiB/s-39.3MiB/s (40.3MB/s-41.2MB/s), io=587MiB (615MB), run=5044-5046msec
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 bdev_null0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 [2024-11-06 14:18:33.720332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 bdev_null1
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 bdev_null2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params --
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:48.385 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.386 { 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme$subsystem", 00:36:48.386 "trtype": "$TEST_TRANSPORT", 00:36:48.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "$NVMF_PORT", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.386 "hdgst": ${hdgst:-false}, 00:36:48.386 "ddgst": ${ddgst:-false} 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 } 00:36:48.386 EOF 00:36:48.386 )") 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.386 14:18:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.386 { 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme$subsystem", 00:36:48.386 "trtype": "$TEST_TRANSPORT", 00:36:48.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "$NVMF_PORT", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.386 "hdgst": ${hdgst:-false}, 00:36:48.386 "ddgst": ${ddgst:-false} 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 } 00:36:48.386 EOF 00:36:48.386 )") 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.386 14:18:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.386 { 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme$subsystem", 00:36:48.386 "trtype": "$TEST_TRANSPORT", 00:36:48.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "$NVMF_PORT", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.386 "hdgst": ${hdgst:-false}, 00:36:48.386 "ddgst": ${ddgst:-false} 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 } 00:36:48.386 EOF 00:36:48.386 )") 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme0", 00:36:48.386 "trtype": "tcp", 00:36:48.386 "traddr": "10.0.0.2", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "4420", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.386 "hdgst": false, 00:36:48.386 "ddgst": false 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 },{ 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme1", 00:36:48.386 "trtype": "tcp", 00:36:48.386 "traddr": "10.0.0.2", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "4420", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:48.386 "hdgst": false, 00:36:48.386 "ddgst": false 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 },{ 00:36:48.386 "params": { 00:36:48.386 "name": "Nvme2", 00:36:48.386 "trtype": "tcp", 00:36:48.386 "traddr": "10.0.0.2", 00:36:48.386 "adrfam": "ipv4", 00:36:48.386 "trsvcid": "4420", 00:36:48.386 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:48.386 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:48.386 "hdgst": false, 00:36:48.386 "ddgst": false 00:36:48.386 }, 00:36:48.386 "method": "bdev_nvme_attach_controller" 00:36:48.386 }' 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.386 14:18:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:48.386 14:18:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.386 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:48.386 ... 00:36:48.386 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:48.386 ... 00:36:48.386 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:48.386 ... 
00:36:48.386 fio-3.35 00:36:48.386 Starting 24 threads 00:37:00.616 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720275: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=684, BW=2736KiB/s (2802kB/s)(26.7MiB/10006msec) 00:37:00.616 slat (nsec): min=5670, max=76071, avg=10482.35, stdev=7479.73 00:37:00.616 clat (usec): min=1978, max=30173, avg=23298.36, stdev=2905.21 00:37:00.616 lat (usec): min=1995, max=30182, avg=23308.85, stdev=2903.96 00:37:00.616 clat percentiles (usec): 00:37:00.616 | 1.00th=[ 4293], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.616 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.616 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:37:00.616 | 99.99th=[30278] 00:37:00.616 bw ( KiB/s): min= 2682, max= 3688, per=4.24%, avg=2740.00, stdev=229.58, samples=19 00:37:00.616 iops : min= 670, max= 922, avg=684.95, stdev=57.41, samples=19 00:37:00.616 lat (msec) : 2=0.03%, 4=0.89%, 10=1.11%, 20=1.23%, 50=96.74% 00:37:00.616 cpu : usr=98.89%, sys=0.83%, ctx=9, majf=0, minf=26 00:37:00.616 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 issued rwts: total=6845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720276: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:37:00.616 slat (nsec): min=5751, max=65573, avg=16608.45, stdev=10249.83 00:37:00.616 clat (usec): min=9274, max=36511, avg=23692.31, stdev=1116.04 00:37:00.616 lat (usec): min=9280, max=36528, avg=23708.92, stdev=1116.17 00:37:00.616 clat percentiles (usec): 
00:37:00.616 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.616 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.616 | 99.00th=[24773], 99.50th=[28443], 99.90th=[30540], 99.95th=[31851], 00:37:00.616 | 99.99th=[36439] 00:37:00.616 bw ( KiB/s): min= 2560, max= 2704, per=4.15%, avg=2680.63, stdev=29.75, samples=19 00:37:00.616 iops : min= 640, max= 676, avg=670.11, stdev= 7.44, samples=19 00:37:00.616 lat (msec) : 10=0.24%, 20=0.83%, 50=98.93% 00:37:00.616 cpu : usr=99.04%, sys=0.68%, ctx=13, majf=0, minf=14 00:37:00.616 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720277: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10006msec) 00:37:00.616 slat (nsec): min=5669, max=65256, avg=18031.32, stdev=10387.55 00:37:00.616 clat (usec): min=17872, max=33908, avg=23708.94, stdev=453.34 00:37:00.616 lat (usec): min=17881, max=33926, avg=23726.97, stdev=453.41 00:37:00.616 clat percentiles (usec): 00:37:00.616 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.616 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.616 | 99.00th=[24773], 99.50th=[24773], 99.90th=[28181], 99.95th=[28181], 00:37:00.616 | 99.99th=[33817] 00:37:00.616 bw ( KiB/s): min= 2565, max= 2688, per=4.15%, avg=2681.21, stdev=28.18, samples=19 00:37:00.616 iops : min= 641, max= 672, avg=670.26, stdev= 
7.10, samples=19 00:37:00.616 lat (msec) : 20=0.03%, 50=99.97% 00:37:00.616 cpu : usr=98.82%, sys=0.78%, ctx=68, majf=0, minf=17 00:37:00.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720278: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=671, BW=2684KiB/s (2749kB/s)(26.2MiB/10003msec) 00:37:00.616 slat (nsec): min=5714, max=64625, avg=17778.82, stdev=10384.90 00:37:00.616 clat (usec): min=2698, max=45217, avg=23675.47, stdev=1620.78 00:37:00.616 lat (usec): min=2704, max=45238, avg=23693.25, stdev=1621.07 00:37:00.616 clat percentiles (usec): 00:37:00.616 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.616 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.616 | 99.00th=[24511], 99.50th=[26608], 99.90th=[45351], 99.95th=[45351], 00:37:00.616 | 99.99th=[45351] 00:37:00.616 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2674.79, stdev=58.81, samples=19 00:37:00.616 iops : min= 640, max= 704, avg=668.68, stdev=14.70, samples=19 00:37:00.616 lat (msec) : 4=0.13%, 10=0.21%, 20=0.63%, 50=99.03% 00:37:00.616 cpu : usr=98.37%, sys=1.10%, ctx=116, majf=0, minf=27 00:37:00.616 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 issued rwts: total=6713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.616 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720280: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=679, BW=2719KiB/s (2784kB/s)(26.6MiB/10003msec) 00:37:00.616 slat (nsec): min=5675, max=80594, avg=12228.94, stdev=9107.79 00:37:00.616 clat (usec): min=2340, max=31405, avg=23438.24, stdev=2460.04 00:37:00.616 lat (usec): min=2358, max=31411, avg=23450.47, stdev=2458.96 00:37:00.616 clat percentiles (usec): 00:37:00.616 | 1.00th=[ 5407], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.616 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.616 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822], 00:37:00.616 | 99.99th=[31327] 00:37:00.616 bw ( KiB/s): min= 2560, max= 3456, per=4.21%, avg=2721.95, stdev=180.18, samples=19 00:37:00.616 iops : min= 640, max= 864, avg=680.47, stdev=45.05, samples=19 00:37:00.616 lat (msec) : 4=0.47%, 10=0.96%, 20=0.72%, 50=97.85% 00:37:00.616 cpu : usr=98.33%, sys=1.08%, ctx=187, majf=0, minf=23 00:37:00.616 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.616 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.616 filename0: (groupid=0, jobs=1): err= 0: pid=2720281: Wed Nov 6 14:18:45 2024 00:37:00.616 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10007msec) 00:37:00.616 slat (nsec): min=5680, max=98022, avg=22658.38, stdev=15029.59 00:37:00.616 clat (usec): min=10989, max=31337, avg=23627.51, stdev=985.79 00:37:00.616 lat (usec): min=11016, max=31346, avg=23650.17, stdev=985.26 00:37:00.616 clat percentiles (usec): 00:37:00.616 | 1.00th=[22414], 5.00th=[23200], 
10.00th=[23200], 20.00th=[23462], 00:37:00.616 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.616 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.616 | 99.00th=[24511], 99.50th=[24773], 99.90th=[31065], 99.95th=[31327], 00:37:00.616 | 99.99th=[31327] 00:37:00.616 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2688.00, stdev=42.67, samples=19 00:37:00.617 iops : min= 640, max= 704, avg=672.00, stdev=10.67, samples=19 00:37:00.617 lat (msec) : 20=0.71%, 50=99.29% 00:37:00.617 cpu : usr=98.41%, sys=1.05%, ctx=108, majf=0, minf=18 00:37:00.617 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename0: (groupid=0, jobs=1): err= 0: pid=2720282: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10015msec) 00:37:00.617 slat (nsec): min=5732, max=60892, avg=17835.32, stdev=10340.50 00:37:00.617 clat (usec): min=13072, max=34148, avg=23692.30, stdev=659.57 00:37:00.617 lat (usec): min=13080, max=34158, avg=23710.13, stdev=659.67 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.617 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.617 | 99.00th=[24773], 99.50th=[24773], 99.90th=[27919], 99.95th=[33817], 00:37:00.617 | 99.99th=[34341] 00:37:00.617 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2680.63, stdev=52.02, samples=19 00:37:00.617 iops : min= 640, max= 704, avg=670.11, stdev=13.00, samples=19 00:37:00.617 lat (msec) : 20=0.36%, 
50=99.64% 00:37:00.617 cpu : usr=98.98%, sys=0.76%, ctx=13, majf=0, minf=21 00:37:00.617 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename0: (groupid=0, jobs=1): err= 0: pid=2720283: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=675, BW=2704KiB/s (2769kB/s)(26.4MiB/10004msec) 00:37:00.617 slat (nsec): min=5665, max=96250, avg=23712.15, stdev=14462.56 00:37:00.617 clat (usec): min=8761, max=39458, avg=23454.26, stdev=2191.59 00:37:00.617 lat (usec): min=8774, max=39483, avg=23477.97, stdev=2192.85 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[13173], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:00.617 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.617 | 99.00th=[30016], 99.50th=[34866], 99.90th=[39060], 99.95th=[39060], 00:37:00.617 | 99.99th=[39584] 00:37:00.617 bw ( KiB/s): min= 2640, max= 2864, per=4.19%, avg=2705.37, stdev=59.00, samples=19 00:37:00.617 iops : min= 660, max= 716, avg=676.32, stdev=14.76, samples=19 00:37:00.617 lat (msec) : 10=0.13%, 20=3.62%, 50=96.24% 00:37:00.617 cpu : usr=98.66%, sys=0.89%, ctx=145, majf=0, minf=23 00:37:00.617 IO depths : 1=5.7%, 2=11.6%, 4=23.9%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: 
(groupid=0, jobs=1): err= 0: pid=2720284: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:37:00.617 slat (nsec): min=5707, max=67837, avg=21109.45, stdev=12090.90 00:37:00.617 clat (usec): min=9524, max=46472, avg=23670.35, stdev=1565.35 00:37:00.617 lat (usec): min=9538, max=46488, avg=23691.46, stdev=1565.73 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.617 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.617 | 99.00th=[24511], 99.50th=[30802], 99.90th=[46400], 99.95th=[46400], 00:37:00.617 | 99.99th=[46400] 00:37:00.617 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2667.47, stdev=64.10, samples=19 00:37:00.617 iops : min= 608, max= 672, avg=666.84, stdev=16.02, samples=19 00:37:00.617 lat (msec) : 10=0.21%, 20=0.54%, 50=99.25% 00:37:00.617 cpu : usr=98.53%, sys=0.94%, ctx=168, majf=0, minf=20 00:37:00.617 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: (groupid=0, jobs=1): err= 0: pid=2720285: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=670, BW=2682KiB/s (2747kB/s)(26.2MiB/10006msec) 00:37:00.617 slat (nsec): min=5664, max=85557, avg=21448.38, stdev=12641.03 00:37:00.617 clat (usec): min=11801, max=37948, avg=23648.73, stdev=1042.36 00:37:00.617 lat (usec): min=11829, max=37985, avg=23670.17, stdev=1043.34 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.617 | 
30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.617 | 99.00th=[24773], 99.50th=[28181], 99.90th=[38011], 99.95th=[38011], 00:37:00.617 | 99.99th=[38011] 00:37:00.617 bw ( KiB/s): min= 2560, max= 2736, per=4.14%, avg=2677.00, stdev=41.87, samples=19 00:37:00.617 iops : min= 640, max= 684, avg=669.21, stdev=10.50, samples=19 00:37:00.617 lat (msec) : 20=0.63%, 50=99.37% 00:37:00.617 cpu : usr=97.52%, sys=1.59%, ctx=682, majf=0, minf=18 00:37:00.617 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: (groupid=0, jobs=1): err= 0: pid=2720286: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=690, BW=2763KiB/s (2829kB/s)(27.0MiB/10016msec) 00:37:00.617 slat (nsec): min=5651, max=82394, avg=12051.44, stdev=9942.38 00:37:00.617 clat (usec): min=10120, max=39785, avg=23081.82, stdev=3429.40 00:37:00.617 lat (usec): min=10127, max=39791, avg=23093.87, stdev=3430.16 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[14746], 5.00th=[16057], 10.00th=[18482], 20.00th=[20579], 00:37:00.617 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26608], 95.00th=[28705], 00:37:00.617 | 99.00th=[32637], 99.50th=[35390], 99.90th=[39060], 99.95th=[39584], 00:37:00.617 | 99.99th=[39584] 00:37:00.617 bw ( KiB/s): min= 2576, max= 2976, per=4.28%, avg=2767.68, stdev=103.28, samples=19 00:37:00.617 iops : min= 644, max= 744, avg=691.89, stdev=25.80, samples=19 00:37:00.617 lat (msec) : 20=15.05%, 50=84.95% 00:37:00.617 cpu : usr=98.89%, 
sys=0.85%, ctx=13, majf=0, minf=23 00:37:00.617 IO depths : 1=1.4%, 2=2.7%, 4=7.5%, 8=74.6%, 16=13.8%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=89.9%, 8=7.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: (groupid=0, jobs=1): err= 0: pid=2720287: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=675, BW=2703KiB/s (2767kB/s)(26.4MiB/10011msec) 00:37:00.617 slat (nsec): min=5756, max=86812, avg=20172.98, stdev=12523.20 00:37:00.617 clat (usec): min=7135, max=28100, avg=23514.42, stdev=1505.20 00:37:00.617 lat (usec): min=7144, max=28110, avg=23534.59, stdev=1505.40 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.617 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.617 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26870], 99.95th=[28181], 00:37:00.617 | 99.99th=[28181] 00:37:00.617 bw ( KiB/s): min= 2682, max= 3040, per=4.19%, avg=2706.21, stdev=80.84, samples=19 00:37:00.617 iops : min= 670, max= 760, avg=676.53, stdev=20.22, samples=19 00:37:00.617 lat (msec) : 10=0.40%, 20=1.61%, 50=97.99% 00:37:00.617 cpu : usr=98.91%, sys=0.81%, ctx=11, majf=0, minf=20 00:37:00.617 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: (groupid=0, jobs=1): err= 0: pid=2720288: Wed Nov 6 
14:18:45 2024 00:37:00.617 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10005msec) 00:37:00.617 slat (nsec): min=5531, max=60528, avg=12791.47, stdev=8640.21 00:37:00.617 clat (usec): min=6874, max=68302, avg=23792.99, stdev=3137.01 00:37:00.617 lat (usec): min=6880, max=68323, avg=23805.78, stdev=3137.23 00:37:00.617 clat percentiles (usec): 00:37:00.617 | 1.00th=[17171], 5.00th=[19006], 10.00th=[20317], 20.00th=[23462], 00:37:00.617 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.617 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26870], 95.00th=[28967], 00:37:00.617 | 99.00th=[34341], 99.50th=[35914], 99.90th=[54789], 99.95th=[54789], 00:37:00.617 | 99.99th=[68682] 00:37:00.617 bw ( KiB/s): min= 2472, max= 2768, per=4.13%, avg=2670.00, stdev=73.79, samples=19 00:37:00.617 iops : min= 618, max= 692, avg=667.47, stdev=18.46, samples=19 00:37:00.617 lat (msec) : 10=0.06%, 20=8.45%, 50=91.25%, 100=0.24% 00:37:00.617 cpu : usr=98.75%, sys=0.88%, ctx=83, majf=0, minf=24 00:37:00.617 IO depths : 1=0.9%, 2=1.9%, 4=5.3%, 8=76.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:37:00.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 complete : 0=0.0%, 4=89.7%, 8=8.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.617 issued rwts: total=6708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.617 filename1: (groupid=0, jobs=1): err= 0: pid=2720289: Wed Nov 6 14:18:45 2024 00:37:00.617 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10010msec) 00:37:00.617 slat (nsec): min=5705, max=80595, avg=12890.69, stdev=11724.25 00:37:00.617 clat (usec): min=10187, max=27026, avg=23618.03, stdev=1321.36 00:37:00.618 lat (usec): min=10220, max=27033, avg=23630.92, stdev=1320.25 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[15270], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 
60.00th=[23725], 00:37:00.618 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.618 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26870], 99.95th=[27132], 00:37:00.618 | 99.99th=[27132] 00:37:00.618 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2701.47, stdev=72.59, samples=19 00:37:00.618 iops : min= 640, max= 736, avg=675.37, stdev=18.15, samples=19 00:37:00.618 lat (msec) : 20=1.66%, 50=98.34% 00:37:00.618 cpu : usr=99.03%, sys=0.70%, ctx=12, majf=0, minf=25 00:37:00.618 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename1: (groupid=0, jobs=1): err= 0: pid=2720290: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=674, BW=2699KiB/s (2763kB/s)(26.4MiB/10005msec) 00:37:00.618 slat (nsec): min=5606, max=70228, avg=18674.06, stdev=11156.71 00:37:00.618 clat (usec): min=6915, max=46730, avg=23542.94, stdev=1999.24 00:37:00.618 lat (usec): min=6921, max=46751, avg=23561.62, stdev=2000.39 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[15533], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.618 | 99.00th=[27132], 99.50th=[30016], 99.90th=[46924], 99.95th=[46924], 00:37:00.618 | 99.99th=[46924] 00:37:00.618 bw ( KiB/s): min= 2528, max= 2912, per=4.16%, avg=2686.84, stdev=72.45, samples=19 00:37:00.618 iops : min= 632, max= 728, avg=671.68, stdev=18.11, samples=19 00:37:00.618 lat (msec) : 10=0.24%, 20=3.08%, 50=96.68% 00:37:00.618 cpu : usr=98.84%, sys=0.89%, ctx=16, majf=0, minf=37 00:37:00.618 IO 
depths : 1=5.5%, 2=11.2%, 4=23.0%, 8=52.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename1: (groupid=0, jobs=1): err= 0: pid=2720292: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10002msec) 00:37:00.618 slat (nsec): min=5680, max=52623, avg=12130.43, stdev=8600.66 00:37:00.618 clat (usec): min=9254, max=46980, avg=23771.57, stdev=1638.71 00:37:00.618 lat (usec): min=9261, max=46999, avg=23783.70, stdev=1638.57 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[18482], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.618 | 99.00th=[26608], 99.50th=[30540], 99.90th=[46924], 99.95th=[46924], 00:37:00.618 | 99.99th=[46924] 00:37:00.618 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2674.21, stdev=58.67, samples=19 00:37:00.618 iops : min= 608, max= 672, avg=668.53, stdev=14.66, samples=19 00:37:00.618 lat (msec) : 10=0.24%, 20=1.10%, 50=98.66% 00:37:00.618 cpu : usr=98.71%, sys=0.89%, ctx=52, majf=0, minf=18 00:37:00.618 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename2: (groupid=0, jobs=1): err= 0: pid=2720293: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=673, 
BW=2693KiB/s (2758kB/s)(26.3MiB/10004msec) 00:37:00.618 slat (nsec): min=5669, max=86596, avg=13710.16, stdev=13069.98 00:37:00.618 clat (usec): min=10057, max=26912, avg=23650.49, stdev=1190.74 00:37:00.618 lat (usec): min=10069, max=26919, avg=23664.20, stdev=1189.48 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[17433], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.618 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26870], 99.95th=[26870], 00:37:00.618 | 99.99th=[26870] 00:37:00.618 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2694.74, stdev=67.11, samples=19 00:37:00.618 iops : min= 640, max= 736, avg=673.68, stdev=16.78, samples=19 00:37:00.618 lat (msec) : 20=1.19%, 50=98.81% 00:37:00.618 cpu : usr=99.08%, sys=0.67%, ctx=10, majf=0, minf=23 00:37:00.618 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename2: (groupid=0, jobs=1): err= 0: pid=2720294: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10004msec) 00:37:00.618 slat (nsec): min=5515, max=94861, avg=20212.26, stdev=14723.73 00:37:00.618 clat (usec): min=6933, max=46347, avg=23765.88, stdev=1589.44 00:37:00.618 lat (usec): min=6939, max=46371, avg=23786.09, stdev=1589.20 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23987], 80.00th=[23987], 
90.00th=[24249], 95.00th=[24249], 00:37:00.618 | 99.00th=[25560], 99.50th=[30016], 99.90th=[46400], 99.95th=[46400], 00:37:00.618 | 99.99th=[46400] 00:37:00.618 bw ( KiB/s): min= 2436, max= 2688, per=4.13%, avg=2667.68, stdev=58.37, samples=19 00:37:00.618 iops : min= 609, max= 672, avg=666.89, stdev=14.59, samples=19 00:37:00.618 lat (msec) : 10=0.09%, 20=0.66%, 50=99.25% 00:37:00.618 cpu : usr=98.99%, sys=0.73%, ctx=13, majf=0, minf=18 00:37:00.618 IO depths : 1=1.5%, 2=3.3%, 4=7.2%, 8=72.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=90.7%, 8=7.7%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename2: (groupid=0, jobs=1): err= 0: pid=2720295: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10010msec) 00:37:00.618 slat (nsec): min=5693, max=67289, avg=11690.64, stdev=8560.94 00:37:00.618 clat (usec): min=10619, max=36527, avg=23755.21, stdev=1156.55 00:37:00.618 lat (usec): min=10626, max=36543, avg=23766.90, stdev=1156.57 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[18220], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:37:00.618 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.618 | 99.00th=[26608], 99.50th=[30016], 99.90th=[31589], 99.95th=[32375], 00:37:00.618 | 99.99th=[36439] 00:37:00.618 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2680.63, stdev=29.27, samples=19 00:37:00.618 iops : min= 640, max= 672, avg=670.11, stdev= 7.32, samples=19 00:37:00.618 lat (msec) : 20=1.34%, 50=98.66% 00:37:00.618 cpu : usr=98.81%, sys=0.83%, ctx=52, majf=0, minf=22 00:37:00.618 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, 
>=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename2: (groupid=0, jobs=1): err= 0: pid=2720296: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10017msec) 00:37:00.618 slat (nsec): min=5773, max=87108, avg=20641.06, stdev=14077.99 00:37:00.618 clat (usec): min=6692, max=30260, avg=23431.28, stdev=1915.60 00:37:00.618 lat (usec): min=6710, max=30292, avg=23451.92, stdev=1916.24 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[10028], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:00.618 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.618 | 99.00th=[24773], 99.50th=[26608], 99.90th=[30016], 99.95th=[30278], 00:37:00.618 | 99.99th=[30278] 00:37:00.618 bw ( KiB/s): min= 2682, max= 3120, per=4.19%, avg=2709.30, stdev=96.68, samples=20 00:37:00.618 iops : min= 670, max= 780, avg=677.30, stdev=24.18, samples=20 00:37:00.618 lat (msec) : 10=0.99%, 20=1.55%, 50=97.47% 00:37:00.618 cpu : usr=99.02%, sys=0.67%, ctx=32, majf=0, minf=31 00:37:00.618 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:00.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.618 issued rwts: total=6790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.618 filename2: (groupid=0, jobs=1): err= 0: pid=2720297: Wed Nov 6 14:18:45 2024 00:37:00.618 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10007msec) 00:37:00.618 slat (nsec): 
min=5684, max=89759, avg=21336.20, stdev=15283.82 00:37:00.618 clat (usec): min=11933, max=30577, avg=23643.01, stdev=874.19 00:37:00.618 lat (usec): min=11939, max=30602, avg=23664.35, stdev=873.62 00:37:00.618 clat percentiles (usec): 00:37:00.618 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:00.618 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.618 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.618 | 99.00th=[24511], 99.50th=[25560], 99.90th=[26870], 99.95th=[30540], 00:37:00.618 | 99.99th=[30540] 00:37:00.618 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2680.95, stdev=29.32, samples=19 00:37:00.618 iops : min= 640, max= 672, avg=670.21, stdev= 7.33, samples=19 00:37:00.618 lat (msec) : 20=0.77%, 50=99.23% 00:37:00.618 cpu : usr=98.89%, sys=0.83%, ctx=36, majf=0, minf=22 00:37:00.619 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.619 filename2: (groupid=0, jobs=1): err= 0: pid=2720298: Wed Nov 6 14:18:45 2024 00:37:00.619 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:37:00.619 slat (nsec): min=5671, max=62036, avg=16919.81, stdev=10209.25 00:37:00.619 clat (usec): min=9332, max=46603, avg=23713.99, stdev=1444.87 00:37:00.619 lat (usec): min=9339, max=46620, avg=23730.91, stdev=1444.75 00:37:00.619 clat percentiles (usec): 00:37:00.619 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.619 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.619 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:37:00.619 | 99.00th=[24511], 
99.50th=[26608], 99.90th=[46400], 99.95th=[46400], 00:37:00.619 | 99.99th=[46400] 00:37:00.619 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2674.21, stdev=58.67, samples=19 00:37:00.619 iops : min= 608, max= 672, avg=668.53, stdev=14.66, samples=19 00:37:00.619 lat (msec) : 10=0.10%, 20=0.64%, 50=99.25% 00:37:00.619 cpu : usr=98.72%, sys=0.84%, ctx=91, majf=0, minf=22 00:37:00.619 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.619 filename2: (groupid=0, jobs=1): err= 0: pid=2720299: Wed Nov 6 14:18:45 2024 00:37:00.619 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10011msec) 00:37:00.619 slat (nsec): min=5648, max=87804, avg=24306.93, stdev=14565.82 00:37:00.619 clat (usec): min=9350, max=35991, avg=23610.28, stdev=1176.07 00:37:00.619 lat (usec): min=9356, max=36007, avg=23634.59, stdev=1176.37 00:37:00.619 clat percentiles (usec): 00:37:00.619 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:00.619 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:00.619 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.619 | 99.00th=[24773], 99.50th=[24773], 99.90th=[35914], 99.95th=[35914], 00:37:00.619 | 99.99th=[35914] 00:37:00.619 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2674.16, stdev=39.40, samples=19 00:37:00.619 iops : min= 640, max= 672, avg=668.47, stdev= 9.88, samples=19 00:37:00.619 lat (msec) : 10=0.10%, 20=0.61%, 50=99.29% 00:37:00.619 cpu : usr=98.51%, sys=1.03%, ctx=128, majf=0, minf=19 00:37:00.619 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:00.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.619 filename2: (groupid=0, jobs=1): err= 0: pid=2720301: Wed Nov 6 14:18:45 2024 00:37:00.619 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:37:00.619 slat (nsec): min=5681, max=72553, avg=17337.34, stdev=10797.14 00:37:00.619 clat (usec): min=10280, max=35305, avg=23669.61, stdev=1160.87 00:37:00.619 lat (usec): min=10293, max=35342, avg=23686.94, stdev=1160.95 00:37:00.619 clat percentiles (usec): 00:37:00.619 | 1.00th=[20841], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.619 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.619 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:37:00.619 | 99.00th=[24773], 99.50th=[25560], 99.90th=[33424], 99.95th=[33817], 00:37:00.619 | 99.99th=[35390] 00:37:00.619 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2687.68, stdev=73.91, samples=19 00:37:00.619 iops : min= 640, max= 704, avg=671.89, stdev=18.48, samples=19 00:37:00.619 lat (msec) : 20=0.83%, 50=99.17% 00:37:00.619 cpu : usr=98.81%, sys=0.76%, ctx=147, majf=0, minf=28 00:37:00.619 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:00.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.619 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.619 00:37:00.619 Run status group 0 (all jobs): 00:37:00.619 READ: bw=63.1MiB/s (66.2MB/s), 2679KiB/s-2763KiB/s (2743kB/s-2829kB/s), io=632MiB (663MB), run=10002-10017msec 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 
-- # destroy_subsystems 0 1 2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:00.619 14:18:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 bdev_null0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.619 [2024-11-06 14:18:45.586834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:00.619 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.620 bdev_null1 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:00.620 { 00:37:00.620 "params": { 
00:37:00.620 "name": "Nvme$subsystem", 00:37:00.620 "trtype": "$TEST_TRANSPORT", 00:37:00.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.620 "adrfam": "ipv4", 00:37:00.620 "trsvcid": "$NVMF_PORT", 00:37:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.620 "hdgst": ${hdgst:-false}, 00:37:00.620 "ddgst": ${ddgst:-false} 00:37:00.620 }, 00:37:00.620 "method": "bdev_nvme_attach_controller" 00:37:00.620 } 00:37:00.620 EOF 00:37:00.620 )") 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.620 14:18:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:00.620 { 00:37:00.620 "params": { 00:37:00.620 "name": "Nvme$subsystem", 00:37:00.620 "trtype": "$TEST_TRANSPORT", 00:37:00.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.620 "adrfam": "ipv4", 00:37:00.620 "trsvcid": "$NVMF_PORT", 00:37:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.620 "hdgst": ${hdgst:-false}, 00:37:00.620 "ddgst": ${ddgst:-false} 00:37:00.620 }, 00:37:00.620 "method": "bdev_nvme_attach_controller" 00:37:00.620 } 00:37:00.620 EOF 00:37:00.620 )") 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:00.620 "params": { 00:37:00.620 "name": "Nvme0", 00:37:00.620 "trtype": "tcp", 00:37:00.620 "traddr": "10.0.0.2", 00:37:00.620 "adrfam": "ipv4", 00:37:00.620 "trsvcid": "4420", 00:37:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:00.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:00.620 "hdgst": false, 00:37:00.620 "ddgst": false 00:37:00.620 }, 00:37:00.620 "method": "bdev_nvme_attach_controller" 00:37:00.620 },{ 00:37:00.620 "params": { 00:37:00.620 "name": "Nvme1", 00:37:00.620 "trtype": "tcp", 00:37:00.620 "traddr": "10.0.0.2", 00:37:00.620 "adrfam": "ipv4", 00:37:00.620 "trsvcid": "4420", 00:37:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:00.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:00.620 "hdgst": false, 00:37:00.620 "ddgst": false 00:37:00.620 }, 00:37:00.620 "method": "bdev_nvme_attach_controller" 00:37:00.620 }' 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:00.620 14:18:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:00.620 14:18:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.620 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:00.620 ... 00:37:00.620 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:00.620 ... 00:37:00.620 fio-3.35 00:37:00.620 Starting 4 threads 00:37:05.906 00:37:05.906 filename0: (groupid=0, jobs=1): err= 0: pid=2722603: Wed Nov 6 14:18:51 2024 00:37:05.906 read: IOPS=2933, BW=22.9MiB/s (24.0MB/s)(115MiB/5003msec) 00:37:05.906 slat (nsec): min=5514, max=46823, avg=6296.94, stdev=2138.90 00:37:05.906 clat (usec): min=1306, max=43321, avg=2712.13, stdev=973.33 00:37:05.906 lat (usec): min=1312, max=43354, avg=2718.42, stdev=973.52 00:37:05.906 clat percentiles (usec): 00:37:05.906 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2540], 00:37:05.906 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:05.906 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2966], 00:37:05.906 | 99.00th=[ 3458], 99.50th=[ 3687], 99.90th=[ 4178], 99.95th=[43254], 00:37:05.906 | 99.99th=[43254] 00:37:05.906 bw ( KiB/s): min=21808, max=24016, per=25.11%, avg=23495.11, stdev=668.33, samples=9 00:37:05.906 iops : min= 2726, max= 3002, avg=2936.89, stdev=83.54, samples=9 00:37:05.906 lat (msec) : 2=0.46%, 4=99.37%, 10=0.12%, 50=0.05% 00:37:05.906 cpu : usr=96.62%, sys=3.12%, ctx=6, majf=0, minf=31 00:37:05.906 IO depths : 1=0.1%, 2=0.1%, 4=66.4%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.906 complete : 0=0.0%, 4=97.1%, 8=2.9%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.906 issued rwts: total=14675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.906 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.906 filename0: (groupid=0, jobs=1): err= 0: pid=2722604: Wed Nov 6 14:18:51 2024 00:37:05.906 read: IOPS=2960, BW=23.1MiB/s (24.3MB/s)(117MiB/5044msec) 00:37:05.906 slat (nsec): min=5516, max=46600, avg=6140.28, stdev=1589.32 00:37:05.906 clat (usec): min=1336, max=46979, avg=2680.29, stdev=947.55 00:37:05.906 lat (usec): min=1346, max=46985, avg=2686.43, stdev=947.48 00:37:05.906 clat percentiles (usec): 00:37:05.906 | 1.00th=[ 1909], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2376], 00:37:05.907 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:05.907 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 3228], 95.00th=[ 3621], 00:37:05.907 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4424], 99.95th=[ 7242], 00:37:05.907 | 99.99th=[46924] 00:37:05.907 bw ( KiB/s): min=23184, max=24592, per=25.53%, avg=23889.60, stdev=463.57, samples=10 00:37:05.907 iops : min= 2898, max= 3074, avg=2986.20, stdev=57.95, samples=10 00:37:05.907 lat (msec) : 2=2.40%, 4=95.96%, 10=1.60%, 50=0.04% 00:37:05.907 cpu : usr=97.40%, sys=2.34%, ctx=9, majf=0, minf=48 00:37:05.907 IO depths : 1=0.1%, 2=0.6%, 4=69.5%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 issued rwts: total=14934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.907 filename1: (groupid=0, jobs=1): err= 0: pid=2722605: Wed Nov 6 14:18:51 2024 00:37:05.907 read: IOPS=2941, BW=23.0MiB/s (24.1MB/s)(115MiB/5004msec) 00:37:05.907 slat (nsec): min=5517, max=41727, avg=6223.53, stdev=1639.36 00:37:05.907 clat (usec): min=1412, max=7248, avg=2702.68, stdev=276.95 00:37:05.907 lat 
(usec): min=1418, max=7255, avg=2708.91, stdev=276.89 00:37:05.907 clat percentiles (usec): 00:37:05.907 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2573], 00:37:05.907 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:05.907 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2999], 00:37:05.907 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4948], 99.95th=[ 5735], 00:37:05.907 | 99.99th=[ 7242] 00:37:05.907 bw ( KiB/s): min=22896, max=23952, per=25.15%, avg=23539.20, stdev=288.28, samples=10 00:37:05.907 iops : min= 2862, max= 2994, avg=2942.40, stdev=36.03, samples=10 00:37:05.907 lat (msec) : 2=0.60%, 4=98.47%, 10=0.93% 00:37:05.907 cpu : usr=96.58%, sys=3.16%, ctx=6, majf=0, minf=65 00:37:05.907 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 issued rwts: total=14717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.907 filename1: (groupid=0, jobs=1): err= 0: pid=2722606: Wed Nov 6 14:18:51 2024 00:37:05.907 read: IOPS=2933, BW=22.9MiB/s (24.0MB/s)(115MiB/5004msec) 00:37:05.907 slat (nsec): min=5519, max=63183, avg=7480.02, stdev=2861.03 00:37:05.907 clat (usec): min=1441, max=6548, avg=2705.24, stdev=258.82 00:37:05.907 lat (usec): min=1451, max=6556, avg=2712.72, stdev=258.79 00:37:05.907 clat percentiles (usec): 00:37:05.907 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:37:05.907 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:05.907 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2966], 00:37:05.907 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[ 6390], 00:37:05.907 | 99.99th=[ 6521] 00:37:05.907 bw ( KiB/s): min=23088, max=23856, per=25.09%, 
avg=23478.40, stdev=227.68, samples=10 00:37:05.907 iops : min= 2886, max= 2982, avg=2934.80, stdev=28.46, samples=10 00:37:05.907 lat (msec) : 2=0.35%, 4=98.85%, 10=0.80% 00:37:05.907 cpu : usr=96.56%, sys=3.18%, ctx=5, majf=0, minf=37 00:37:05.907 IO depths : 1=0.1%, 2=0.1%, 4=73.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.907 issued rwts: total=14679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.907 00:37:05.907 Run status group 0 (all jobs): 00:37:05.907 READ: bw=91.4MiB/s (95.8MB/s), 22.9MiB/s-23.1MiB/s (24.0MB/s-24.3MB/s), io=461MiB (483MB), run=5003-5044msec 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 00:37:05.907 real 0m24.700s 00:37:05.907 user 5m19.297s 00:37:05.907 sys 0m4.600s 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 ************************************ 00:37:05.907 END TEST fio_dif_rand_params 00:37:05.907 ************************************ 00:37:05.907 14:18:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:05.907 14:18:52 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:05.907 14:18:52 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:05.907 14:18:52 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 ************************************ 00:37:05.907 START TEST fio_dif_digest 00:37:05.907 ************************************ 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 bdev_null0 00:37:05.907 14:18:52 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.907 [2024-11-06 14:18:52.164676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.907 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.907 { 00:37:05.907 "params": { 00:37:05.908 "name": "Nvme$subsystem", 00:37:05.908 "trtype": "$TEST_TRANSPORT", 00:37:05.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.908 "adrfam": "ipv4", 00:37:05.908 "trsvcid": "$NVMF_PORT", 00:37:05.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.908 "hdgst": ${hdgst:-false}, 00:37:05.908 "ddgst": ${ddgst:-false} 00:37:05.908 }, 00:37:05.908 "method": "bdev_nvme_attach_controller" 00:37:05.908 } 00:37:05.908 EOF 00:37:05.908 )") 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:37:05.908 14:18:52 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:05.908 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:06.167 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:06.167 14:18:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:06.167 "params": { 00:37:06.167 "name": "Nvme0", 00:37:06.167 "trtype": "tcp", 00:37:06.167 "traddr": "10.0.0.2", 00:37:06.167 "adrfam": "ipv4", 00:37:06.167 "trsvcid": "4420", 00:37:06.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.167 "hdgst": true, 00:37:06.167 "ddgst": true 00:37:06.167 }, 00:37:06.167 "method": "bdev_nvme_attach_controller" 00:37:06.167 }' 00:37:06.167 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:06.167 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.168 14:18:52 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:06.168 14:18:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.427 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:06.427 ... 00:37:06.427 fio-3.35 00:37:06.427 Starting 3 threads 00:37:18.768 00:37:18.768 filename0: (groupid=0, jobs=1): err= 0: pid=2723967: Wed Nov 6 14:19:03 2024 00:37:18.768 read: IOPS=349, BW=43.7MiB/s (45.8MB/s)(439MiB/10047msec) 00:37:18.768 slat (nsec): min=5861, max=56305, avg=9148.85, stdev=2170.01 00:37:18.768 clat (usec): min=5215, max=48320, avg=8562.31, stdev=1583.06 00:37:18.768 lat (usec): min=5224, max=48330, avg=8571.46, stdev=1583.16 00:37:18.768 clat percentiles (usec): 00:37:18.768 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7308], 00:37:18.768 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:37:18.768 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10552], 00:37:18.768 | 99.00th=[11338], 99.50th=[11731], 99.90th=[12649], 99.95th=[46924], 00:37:18.768 | 99.99th=[48497] 00:37:18.768 bw ( KiB/s): min=42752, max=48384, per=40.35%, avg=44915.70, stdev=1647.52, samples=20 00:37:18.768 iops : min= 334, max= 378, avg=350.80, stdev=12.89, samples=20 00:37:18.768 lat (msec) : 10=86.30%, 20=13.64%, 50=0.06% 00:37:18.768 cpu : usr=93.87%, sys=5.85%, ctx=26, majf=0, 
minf=152 00:37:18.768 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.768 issued rwts: total=3511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.768 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.768 filename0: (groupid=0, jobs=1): err= 0: pid=2723968: Wed Nov 6 14:19:03 2024 00:37:18.768 read: IOPS=336, BW=42.1MiB/s (44.1MB/s)(422MiB/10043msec) 00:37:18.768 slat (nsec): min=5925, max=64804, avg=7983.75, stdev=2114.94 00:37:18.768 clat (usec): min=5185, max=48736, avg=8894.59, stdev=1726.63 00:37:18.768 lat (usec): min=5193, max=48743, avg=8902.58, stdev=1726.50 00:37:18.768 clat percentiles (usec): 00:37:18.768 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7439], 00:37:18.768 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:37:18.768 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:37:18.768 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14091], 99.95th=[46400], 00:37:18.768 | 99.99th=[48497] 00:37:18.768 bw ( KiB/s): min=40704, max=45915, per=38.83%, avg=43230.15, stdev=1623.22, samples=20 00:37:18.768 iops : min= 318, max= 358, avg=337.70, stdev=12.62, samples=20 00:37:18.768 lat (msec) : 10=73.63%, 20=26.31%, 50=0.06% 00:37:18.768 cpu : usr=93.63%, sys=6.11%, ctx=18, majf=0, minf=181 00:37:18.768 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.768 issued rwts: total=3379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.768 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.768 filename0: (groupid=0, jobs=1): err= 0: pid=2723969: Wed Nov 6 14:19:03 2024 00:37:18.768 
read: IOPS=184, BW=23.0MiB/s (24.1MB/s)(231MiB/10040msec) 00:37:18.768 slat (nsec): min=5879, max=34464, avg=7701.15, stdev=1641.60 00:37:18.768 clat (usec): min=7035, max=91712, avg=16289.51, stdev=15999.06 00:37:18.768 lat (usec): min=7044, max=91719, avg=16297.21, stdev=15999.12 00:37:18.768 clat percentiles (usec): 00:37:18.768 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:37:18.768 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:37:18.768 | 70.00th=[10552], 80.00th=[11207], 90.00th=[50070], 95.00th=[51119], 00:37:18.768 | 99.00th=[53740], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:37:18.768 | 99.99th=[91751] 00:37:18.768 bw ( KiB/s): min=15104, max=31488, per=21.21%, avg=23616.00, stdev=4901.00, samples=20 00:37:18.768 iops : min= 118, max= 246, avg=184.50, stdev=38.29, samples=20 00:37:18.768 lat (msec) : 10=50.32%, 20=34.74%, 50=4.27%, 100=10.66% 00:37:18.768 cpu : usr=95.70%, sys=4.03%, ctx=20, majf=0, minf=87 00:37:18.769 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.769 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.769 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.769 00:37:18.769 Run status group 0 (all jobs): 00:37:18.769 READ: bw=109MiB/s (114MB/s), 23.0MiB/s-43.7MiB/s (24.1MB/s-45.8MB/s), io=1092MiB (1145MB), run=10040-10047msec 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # 
local sub_id=0 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.769 00:37:18.769 real 0m11.224s 00:37:18.769 user 0m44.201s 00:37:18.769 sys 0m1.960s 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:18.769 14:19:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.769 ************************************ 00:37:18.769 END TEST fio_dif_digest 00:37:18.769 ************************************ 00:37:18.769 14:19:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:18.769 14:19:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.769 rmmod nvme_tcp 00:37:18.769 rmmod nvme_fabrics 00:37:18.769 rmmod nvme_keyring 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2713180 ']' 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2713180 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 2713180 ']' 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 2713180 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2713180 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2713180' 00:37:18.769 killing process with pid 2713180 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@971 -- # kill 2713180 00:37:18.769 14:19:03 nvmf_dif -- common/autotest_common.sh@976 -- # wait 2713180 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:18.769 14:19:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:20.683 Waiting for block devices as requested 00:37:20.943 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.943 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.943 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:21.206 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:21.206 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:21.206 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:21.206 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:21.466 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:21.466 
0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:21.727 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:21.727 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:21.727 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:21.986 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:21.986 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:21.986 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:22.246 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:22.246 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:22.507 14:19:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.507 14:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:22.507 14:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.086 14:19:10 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:25.086 00:37:25.086 real 1m19.010s 00:37:25.086 user 7m58.446s 00:37:25.086 sys 0m22.651s 00:37:25.086 14:19:10 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:25.086 14:19:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:25.086 ************************************ 00:37:25.086 END TEST nvmf_dif 00:37:25.086 ************************************ 00:37:25.086 14:19:10 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:25.086 14:19:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:25.086 14:19:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:25.086 14:19:10 -- common/autotest_common.sh@10 -- # set +x 00:37:25.086 ************************************ 00:37:25.086 START TEST nvmf_abort_qd_sizes 00:37:25.086 ************************************ 00:37:25.086 14:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:25.086 * Looking for test storage... 00:37:25.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:25.086 14:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:25.086 14:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:25.086 14:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:25.086 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:25.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.087 --rc genhtml_branch_coverage=1 00:37:25.087 --rc genhtml_function_coverage=1 00:37:25.087 --rc 
genhtml_legend=1 00:37:25.087 --rc geninfo_all_blocks=1 00:37:25.087 --rc geninfo_unexecuted_blocks=1 00:37:25.087 00:37:25.087 ' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:25.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.087 --rc genhtml_branch_coverage=1 00:37:25.087 --rc genhtml_function_coverage=1 00:37:25.087 --rc genhtml_legend=1 00:37:25.087 --rc geninfo_all_blocks=1 00:37:25.087 --rc geninfo_unexecuted_blocks=1 00:37:25.087 00:37:25.087 ' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:25.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.087 --rc genhtml_branch_coverage=1 00:37:25.087 --rc genhtml_function_coverage=1 00:37:25.087 --rc genhtml_legend=1 00:37:25.087 --rc geninfo_all_blocks=1 00:37:25.087 --rc geninfo_unexecuted_blocks=1 00:37:25.087 00:37:25.087 ' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:25.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.087 --rc genhtml_branch_coverage=1 00:37:25.087 --rc genhtml_function_coverage=1 00:37:25.087 --rc genhtml_legend=1 00:37:25.087 --rc geninfo_all_blocks=1 00:37:25.087 --rc geninfo_unexecuted_blocks=1 00:37:25.087 00:37:25.087 ' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:25.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:25.087 14:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.227 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:33.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:33.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:33.228 Found net devices under 0000:31:00.0: cvl_0_0 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:33.228 Found net devices under 0000:31:00.1: cvl_0_1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:33.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:37:33.228 00:37:33.228 --- 10.0.0.2 ping statistics --- 00:37:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.228 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:33.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:37:33.228 00:37:33.228 --- 10.0.0.1 ping statistics --- 00:37:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.228 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:33.228 14:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:35.776 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:35.776 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.776 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:36.348 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:36.348 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:36.348 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2733487 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2733487 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 2733487 ']' 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:36.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:36.349 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.349 [2024-11-06 14:19:22.449658] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:37:36.349 [2024-11-06 14:19:22.449706] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.349 [2024-11-06 14:19:22.545074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.349 [2024-11-06 14:19:22.582665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.349 [2024-11-06 14:19:22.582698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.349 [2024-11-06 14:19:22.582706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.349 [2024-11-06 14:19:22.582713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.349 [2024-11-06 14:19:22.582719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:36.349 [2024-11-06 14:19:22.584258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.349 [2024-11-06 14:19:22.584409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.349 [2024-11-06 14:19:22.584562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.349 [2024-11-06 14:19:22.584563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:37.293 14:19:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:37.293 ************************************ 00:37:37.293 START TEST spdk_target_abort 00:37:37.293 ************************************ 00:37:37.293 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:37.293 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:37.293 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:37.293 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.293 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.555 spdk_targetn1 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.555 [2024-11-06 14:19:23.650634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.555 [2024-11-06 14:19:23.698992] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.555 14:19:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.817 [2024-11-06 14:19:24.018861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:40 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:37.817 [2024-11-06 14:19:24.018908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:37:37.817 [2024-11-06 14:19:24.020861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:160 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:37.817 [2024-11-06 14:19:24.020891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:37:37.817 [2024-11-06 14:19:24.026247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:224 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:37.817 [2024-11-06 
14:19:24.026272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:001d p:1 m:0 dnr:0 00:37:37.817 [2024-11-06 14:19:24.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:312 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:37.817 [2024-11-06 14:19:24.068509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:002a p:1 m:0 dnr:0 00:37:37.817 [2024-11-06 14:19:24.090456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:904 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:37.817 [2024-11-06 14:19:24.090487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.098293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1136 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.098321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:008f p:1 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.114368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1648 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.114398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.122354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1904 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.122383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f2 p:1 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.142300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2640 len:8 
PRP1 0x200004ac4000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.142330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.150450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2904 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.150478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.173311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3536 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.173341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00bb p:0 m:0 dnr:0 00:37:38.079 [2024-11-06 14:19:24.188429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4008 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:38.079 [2024-11-06 14:19:24.188460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:37:41.382 Initializing NVMe Controllers 00:37:41.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:41.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:41.382 Initialization complete. Launching workers. 
00:37:41.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11472, failed: 12 00:37:41.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2308, failed to submit 9176 00:37:41.382 success 684, unsuccessful 1624, failed 0 00:37:41.382 14:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:41.382 14:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:41.382 [2024-11-06 14:19:27.323997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:41.382 [2024-11-06 14:19:27.324035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:41.382 [2024-11-06 14:19:27.363969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1368 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:41.382 [2024-11-06 14:19:27.363993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:37:41.382 [2024-11-06 14:19:27.391940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1984 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:41.382 [2024-11-06 14:19:27.391961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:37:41.382 [2024-11-06 14:19:27.403837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2112 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:41.382 [2024-11-06 14:19:27.403860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:41.382 [2024-11-06 14:19:27.475914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:3744 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:41.382 [2024-11-06 14:19:27.475937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00da p:0 m:0 dnr:0 00:37:44.681 Initializing NVMe Controllers 00:37:44.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:44.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:44.681 Initialization complete. Launching workers. 00:37:44.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8472, failed: 5 00:37:44.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7265 00:37:44.681 success 339, unsuccessful 873, failed 0 00:37:44.681 14:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:44.681 14:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.226 [2024-11-06 14:19:33.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:172 nsid:1 lba:312520 len:8 PRP1 0x200004b24000 PRP2 0x0 00:37:47.226 [2024-11-06 14:19:33.432292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:172 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:47.796 Initializing NVMe Controllers 00:37:47.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.796 
Initialization complete. Launching workers. 00:37:47.796 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43794, failed: 1 00:37:47.796 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2950, failed to submit 40845 00:37:47.796 success 575, unsuccessful 2375, failed 0 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.796 14:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2733487 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 2733487 ']' 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 2733487 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2733487 00:37:49.709 
14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2733487' 00:37:49.709 killing process with pid 2733487 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 2733487 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 2733487 00:37:49.709 00:37:49.709 real 0m12.464s 00:37:49.709 user 0m50.674s 00:37:49.709 sys 0m2.068s 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.709 ************************************ 00:37:49.709 END TEST spdk_target_abort 00:37:49.709 ************************************ 00:37:49.709 14:19:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:49.709 14:19:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:49.709 14:19:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:49.709 14:19:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.709 ************************************ 00:37:49.709 START TEST kernel_target_abort 00:37:49.709 ************************************ 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:49.709 14:19:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:49.709 14:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:53.009 Waiting for block devices as requested 00:37:53.269 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:53.269 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:53.269 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:53.530 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:53.530 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:53.530 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:53.791 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:53.791 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:53.791 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:54.051 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:54.051 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:54.311 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:54.311 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:54.311 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:54.311 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:54.572 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:54.572 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:54.832 14:19:41 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:54.832 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:55.094 No valid GPT data, bailing 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:55.094 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:37:55.094 00:37:55.094 Discovery Log Number of Records 2, Generation counter 2 00:37:55.094 =====Discovery Log Entry 0====== 00:37:55.095 trtype: tcp 00:37:55.095 adrfam: ipv4 00:37:55.095 subtype: current discovery subsystem 00:37:55.095 treq: not specified, sq flow control disable supported 00:37:55.095 portid: 1 00:37:55.095 trsvcid: 4420 00:37:55.095 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:55.095 traddr: 10.0.0.1 00:37:55.095 eflags: none 00:37:55.095 sectype: none 00:37:55.095 =====Discovery Log Entry 1====== 00:37:55.095 trtype: tcp 00:37:55.095 adrfam: ipv4 00:37:55.095 subtype: nvme subsystem 00:37:55.095 treq: not specified, sq flow control disable supported 00:37:55.095 portid: 1 00:37:55.095 trsvcid: 4420 00:37:55.095 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:55.095 traddr: 10.0.0.1 00:37:55.095 eflags: none 00:37:55.095 sectype: none 00:37:55.095 14:19:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:55.095 14:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:58.396 Initializing NVMe Controllers 00:37:58.396 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:58.396 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:58.396 Initialization complete. Launching workers. 
00:37:58.396 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67724, failed: 0 00:37:58.396 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67724, failed to submit 0 00:37:58.396 success 0, unsuccessful 67724, failed 0 00:37:58.396 14:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:58.396 14:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:01.696 Initializing NVMe Controllers 00:38:01.696 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:01.696 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:01.696 Initialization complete. Launching workers. 00:38:01.696 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 112132, failed: 0 00:38:01.696 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28230, failed to submit 83902 00:38:01.696 success 0, unsuccessful 28230, failed 0 00:38:01.696 14:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:01.696 14:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:05.004 Initializing NVMe Controllers 00:38:05.004 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:05.004 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:05.004 Initialization complete. Launching workers. 
00:38:05.004 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145809, failed: 0 00:38:05.004 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36482, failed to submit 109327 00:38:05.004 success 0, unsuccessful 36482, failed 0 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:05.004 14:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:08.305 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:08.305 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:10.218 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:10.218 00:38:10.218 real 0m20.510s 00:38:10.218 user 0m9.905s 00:38:10.218 sys 0m6.253s 00:38:10.218 14:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:10.218 14:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.218 ************************************ 00:38:10.218 END TEST kernel_target_abort 00:38:10.218 ************************************ 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:10.218 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:10.218 rmmod nvme_tcp 00:38:10.218 rmmod nvme_fabrics 00:38:10.218 rmmod nvme_keyring 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2733487 ']' 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2733487 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 2733487 ']' 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 2733487 00:38:10.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2733487) - No such process 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 2733487 is not found' 00:38:10.479 Process with pid 2733487 is not found 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:10.479 14:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:13.780 Waiting for block devices as requested 00:38:13.780 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:13.780 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:14.042 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:14.042 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:14.042 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:14.304 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:14.304 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:14.304 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:14.571 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:14.571 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:14.892 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:14.892 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:14.892 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:14.892 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:15.231 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:15.231 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:15.231 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:15.491 14:20:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.036 14:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:18.036 00:38:18.036 real 0m52.945s 00:38:18.036 user 1m6.024s 00:38:18.036 sys 0m19.423s 00:38:18.036 14:20:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:18.036 14:20:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.036 ************************************ 00:38:18.036 END TEST nvmf_abort_qd_sizes 00:38:18.036 ************************************ 00:38:18.036 14:20:03 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:18.036 14:20:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:18.036 14:20:03 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:38:18.036 14:20:03 -- common/autotest_common.sh@10 -- # set +x 00:38:18.036 ************************************ 00:38:18.036 START TEST keyring_file 00:38:18.036 ************************************ 00:38:18.036 14:20:03 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:18.036 * Looking for test storage... 00:38:18.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:18.036 14:20:03 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:18.036 14:20:03 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:18.036 14:20:03 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.036 14:20:04 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:18.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.036 --rc genhtml_branch_coverage=1 00:38:18.036 --rc genhtml_function_coverage=1 00:38:18.036 --rc genhtml_legend=1 00:38:18.036 --rc geninfo_all_blocks=1 00:38:18.036 --rc geninfo_unexecuted_blocks=1 00:38:18.036 00:38:18.036 ' 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:18.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.036 --rc genhtml_branch_coverage=1 00:38:18.036 --rc genhtml_function_coverage=1 00:38:18.036 --rc genhtml_legend=1 00:38:18.036 --rc geninfo_all_blocks=1 00:38:18.036 --rc 
geninfo_unexecuted_blocks=1 00:38:18.036 00:38:18.036 ' 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:18.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.036 --rc genhtml_branch_coverage=1 00:38:18.036 --rc genhtml_function_coverage=1 00:38:18.036 --rc genhtml_legend=1 00:38:18.036 --rc geninfo_all_blocks=1 00:38:18.036 --rc geninfo_unexecuted_blocks=1 00:38:18.036 00:38:18.036 ' 00:38:18.036 14:20:04 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:18.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.036 --rc genhtml_branch_coverage=1 00:38:18.036 --rc genhtml_function_coverage=1 00:38:18.036 --rc genhtml_legend=1 00:38:18.036 --rc geninfo_all_blocks=1 00:38:18.036 --rc geninfo_unexecuted_blocks=1 00:38:18.036 00:38:18.036 ' 00:38:18.036 14:20:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:18.036 14:20:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.036 14:20:04 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.036 14:20:04 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:18.036 14:20:04 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.037 14:20:04 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.037 14:20:04 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.037 14:20:04 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.037 14:20:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.037 14:20:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.037 14:20:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.037 14:20:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:18.037 14:20:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:18.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QYU80wIGX8 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QYU80wIGX8 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QYU80wIGX8 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QYU80wIGX8 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dQbBjl8Z9n 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:18.037 14:20:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dQbBjl8Z9n 00:38:18.037 14:20:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dQbBjl8Z9n 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dQbBjl8Z9n 
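
The `format_interchange_psk` steps above turn each raw hex key into an `NVMeTLSkey-1:…` string before it is written to the temp file. A minimal editorial sketch of that encoding (this is a re-implementation guess based on the TLS PSK interchange layout, not the script's actual `python -` body; the little-endian CRC byte order is an assumption):

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, hmac_id: int = 0) -> str:
    # Sketch: "NVMeTLSkey-1:<hmac>:<base64(key || crc32(key))>:"
    key = bytes.fromhex(hex_key)
    # CRC32 of the key appended as 4 bytes (little-endian here; an assumption)
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"

# key0 from the log above; digest 0 means the key is used as-is
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The resulting string is what `chmod 0600` then protects at `/tmp/tmp.QYU80wIGX8` before `keyring_file_add_key` accepts it.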
00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=2743821 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2743821 00:38:18.037 14:20:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2743821 ']' 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:18.037 14:20:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:18.037 [2024-11-06 14:20:04.306304] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:38:18.037 [2024-11-06 14:20:04.306380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743821 ] 00:38:18.298 [2024-11-06 14:20:04.399778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.298 [2024-11-06 14:20:04.453153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.869 14:20:05 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:18.869 14:20:05 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:18.869 14:20:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:18.869 14:20:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.869 14:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:18.869 [2024-11-06 14:20:05.136034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.130 null0 00:38:19.130 [2024-11-06 14:20:05.168080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:19.130 [2024-11-06 14:20:05.168455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.130 14:20:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:19.130 [2024-11-06 14:20:05.200139] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:19.130 request: 00:38:19.130 { 00:38:19.130 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.130 "secure_channel": false, 00:38:19.130 "listen_address": { 00:38:19.130 "trtype": "tcp", 00:38:19.130 "traddr": "127.0.0.1", 00:38:19.130 "trsvcid": "4420" 00:38:19.130 }, 00:38:19.130 "method": "nvmf_subsystem_add_listener", 00:38:19.130 "req_id": 1 00:38:19.130 } 00:38:19.130 Got JSON-RPC error response 00:38:19.130 response: 00:38:19.130 { 00:38:19.130 "code": -32602, 00:38:19.130 "message": "Invalid parameters" 00:38:19.130 } 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:19.130 14:20:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=2743904 00:38:19.130 14:20:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2743904 /var/tmp/bperf.sock 00:38:19.130 14:20:05 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:19.130 14:20:05 
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2743904 ']' 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:19.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:19.130 14:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:19.130 [2024-11-06 14:20:05.262382] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 00:38:19.130 [2024-11-06 14:20:05.262447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743904 ] 00:38:19.130 [2024-11-06 14:20:05.356446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.391 [2024-11-06 14:20:05.409125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.963 14:20:06 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:19.963 14:20:06 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:19.963 14:20:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:19.963 14:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:20.223 14:20:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dQbBjl8Z9n 00:38:20.223 14:20:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dQbBjl8Z9n 00:38:20.223 14:20:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:20.223 14:20:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:20.223 14:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.223 14:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.223 14:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:20.484 14:20:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QYU80wIGX8 == \/\t\m\p\/\t\m\p\.\Q\Y\U\8\0\w\I\G\X\8 ]] 00:38:20.484 14:20:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:20.484 14:20:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:20.484 14:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.484 14:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:20.484 14:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.745 14:20:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.dQbBjl8Z9n == \/\t\m\p\/\t\m\p\.\d\Q\b\B\j\l\8\Z\9\n ]] 00:38:20.745 14:20:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:20.745 14:20:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:20.745 14:20:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.745 14:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.745 14:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.745 14:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
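
The `get_key`/`get_refcnt` helpers above pipe `keyring_get_keys` output through `jq '.[] | select(.name == "key0")'` and `jq -r .refcnt`. A Python equivalent of that selection, using an illustrative payload in the shape of the RPC response (the sample data is made up, not captured from this run):

```python
import json

# Illustrative keyring_get_keys-style payload (not actual captured output)
keys_json = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.QYU80wIGX8", "refcnt": 1},
    {"name": "key1", "path": "/tmp/tmp.dQbBjl8Z9n", "refcnt": 1},
])

def get_key(payload: str, name: str) -> dict:
    # Mirrors: jq '.[] | select(.name == "<name>")'
    return next(k for k in json.loads(payload) if k["name"] == name)

# Mirrors: get_refcnt key0, then the (( 1 == 1 )) check in the log
print(get_key(keys_json, "key0")["refcnt"])  # prints 1
```

After `bdev_nvme_attach_controller --psk key0` later in the log, the same check expects `refcnt` to rise to 2 for key0.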
00:38:21.006 14:20:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:21.006 14:20:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.006 14:20:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:21.006 14:20:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.006 14:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.268 [2024-11-06 14:20:07.401275] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:21.268 nvme0n1 00:38:21.268 14:20:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:21.268 14:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.268 14:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.268 14:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.268 14:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.268 14:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:21.529 14:20:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:21.529 14:20:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:21.529 14:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.529 14:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.529 14:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.529 14:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.529 14:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.791 14:20:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:21.791 14:20:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:21.791 Running I/O for 1 seconds... 00:38:22.734 17901.00 IOPS, 69.93 MiB/s 00:38:22.734 Latency(us) 00:38:22.734 [2024-11-06T13:20:09.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.734 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:22.734 nvme0n1 : 1.00 17962.84 70.17 0.00 0.00 7113.41 3631.79 20643.84 00:38:22.734 [2024-11-06T13:20:09.014Z] =================================================================================================================== 00:38:22.734 [2024-11-06T13:20:09.014Z] Total : 17962.84 70.17 0.00 0.00 7113.41 3631.79 20643.84 00:38:22.734 { 00:38:22.734 "results": [ 00:38:22.734 { 00:38:22.734 "job": "nvme0n1", 00:38:22.734 "core_mask": "0x2", 00:38:22.734 "workload": "randrw", 00:38:22.734 "percentage": 50, 00:38:22.734 "status": "finished", 00:38:22.734 "queue_depth": 128, 00:38:22.734 "io_size": 4096, 00:38:22.734 "runtime": 1.003683, 00:38:22.734 "iops": 17962.842849784243, 00:38:22.734 "mibps": 70.1673548819697, 
00:38:22.734 "io_failed": 0, 00:38:22.734 "io_timeout": 0, 00:38:22.734 "avg_latency_us": 7113.408628321038, 00:38:22.734 "min_latency_us": 3631.786666666667, 00:38:22.734 "max_latency_us": 20643.84 00:38:22.734 } 00:38:22.734 ], 00:38:22.734 "core_count": 1 00:38:22.734 } 00:38:22.995 14:20:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:22.995 14:20:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.995 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.255 14:20:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:23.255 14:20:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:23.255 14:20:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:23.255 14:20:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:23.255 14:20:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:23.255 14:20:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:23.255 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.518 14:20:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:23.518 14:20:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.518 14:20:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:23.518 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:23.518 [2024-11-06 14:20:09.721965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:23.518 [2024-11-06 14:20:09.722371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d3cb0 (107): Transport endpoint is not connected 00:38:23.518 [2024-11-06 14:20:09.723367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d3cb0 (9): Bad file descriptor 00:38:23.518 [2024-11-06 14:20:09.724369] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:23.518 [2024-11-06 14:20:09.724377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:23.518 [2024-11-06 14:20:09.724384] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:23.518 [2024-11-06 14:20:09.724390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:23.518 request: 00:38:23.518 { 00:38:23.518 "name": "nvme0", 00:38:23.518 "trtype": "tcp", 00:38:23.518 "traddr": "127.0.0.1", 00:38:23.518 "adrfam": "ipv4", 00:38:23.518 "trsvcid": "4420", 00:38:23.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.519 "prchk_reftag": false, 00:38:23.519 "prchk_guard": false, 00:38:23.519 "hdgst": false, 00:38:23.519 "ddgst": false, 00:38:23.519 "psk": "key1", 00:38:23.519 "allow_unrecognized_csi": false, 00:38:23.519 "method": "bdev_nvme_attach_controller", 00:38:23.519 "req_id": 1 00:38:23.519 } 00:38:23.519 Got JSON-RPC error response 00:38:23.519 response: 00:38:23.519 { 00:38:23.519 "code": -5, 00:38:23.519 "message": "Input/output error" 00:38:23.519 } 00:38:23.519 14:20:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:23.519 14:20:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:23.519 14:20:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:23.519 14:20:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:23.519 14:20:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:23.519 14:20:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:23.519 14:20:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:23.519 14:20:09 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:23.519 14:20:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:23.519 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.779 14:20:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:23.779 14:20:09 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:23.779 14:20:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:23.779 14:20:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:23.779 14:20:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:23.779 14:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.779 14:20:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.039 14:20:10 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:24.039 14:20:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:24.039 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:24.039 14:20:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:24.039 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:24.300 14:20:10 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:24.300 14:20:10 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:24.300 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.560 14:20:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:24.560 14:20:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.560 [2024-11-06 14:20:10.778102] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QYU80wIGX8': 0100660 00:38:24.560 [2024-11-06 14:20:10.778122] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:24.560 request: 00:38:24.560 { 00:38:24.560 "name": "key0", 00:38:24.560 "path": "/tmp/tmp.QYU80wIGX8", 00:38:24.560 "method": "keyring_file_add_key", 00:38:24.560 "req_id": 1 00:38:24.560 } 00:38:24.560 Got JSON-RPC error response 00:38:24.560 response: 00:38:24.560 { 00:38:24.560 "code": -1, 00:38:24.560 "message": "Operation not permitted" 00:38:24.560 } 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:24.560 14:20:10 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:24.560 14:20:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:24.560 14:20:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.560 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QYU80wIGX8 00:38:24.820 14:20:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.QYU80wIGX8 00:38:24.820 14:20:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:24.820 14:20:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.820 14:20:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.820 14:20:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.820 14:20:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.820 14:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.080 14:20:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:25.080 14:20:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.080 14:20:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:25.080 14:20:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:25.081 14:20:11 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.081 14:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.081 [2024-11-06 14:20:11.319479] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QYU80wIGX8': No such file or directory 00:38:25.081 [2024-11-06 14:20:11.319494] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:25.081 [2024-11-06 14:20:11.319507] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:25.081 [2024-11-06 14:20:11.319513] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:25.081 [2024-11-06 14:20:11.319518] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:25.081 [2024-11-06 14:20:11.319523] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:25.081 request: 00:38:25.081 { 00:38:25.081 "name": "nvme0", 00:38:25.081 "trtype": "tcp", 00:38:25.081 "traddr": "127.0.0.1", 00:38:25.081 "adrfam": "ipv4", 00:38:25.081 "trsvcid": "4420", 00:38:25.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:25.081 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:25.081 "prchk_reftag": false, 00:38:25.081 "prchk_guard": false, 00:38:25.081 "hdgst": false, 00:38:25.081 "ddgst": false, 00:38:25.081 "psk": "key0", 00:38:25.081 "allow_unrecognized_csi": false, 00:38:25.081 "method": "bdev_nvme_attach_controller", 00:38:25.081 "req_id": 1 00:38:25.081 } 00:38:25.081 Got JSON-RPC error response 00:38:25.081 response: 00:38:25.081 { 00:38:25.081 "code": -19, 00:38:25.081 "message": "No such device" 00:38:25.081 } 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:25.081 14:20:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:25.081 14:20:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:25.081 14:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:25.342 14:20:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FOkBYtjrOn 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:25.342 14:20:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:25.342 14:20:11 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:25.342 14:20:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:25.342 14:20:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:25.342 14:20:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:25.342 14:20:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FOkBYtjrOn 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FOkBYtjrOn 00:38:25.342 14:20:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FOkBYtjrOn 00:38:25.342 14:20:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FOkBYtjrOn 00:38:25.342 14:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FOkBYtjrOn 00:38:25.602 14:20:11 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.602 14:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.862 nvme0n1 00:38:25.862 14:20:11 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:25.862 14:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.862 14:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.862 14:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.862 14:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.862 14:20:11 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.123 14:20:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:26.123 14:20:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:26.123 14:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:26.123 14:20:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:26.123 14:20:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:26.123 14:20:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.123 14:20:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:26.123 14:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.382 14:20:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:26.382 14:20:12 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:26.382 14:20:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:26.382 14:20:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.382 14:20:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.382 14:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.382 14:20:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:26.642 14:20:12 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:26.642 14:20:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:26.642 14:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:26.642 14:20:12 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:26.642 14:20:12 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:26.642 14:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.902 14:20:13 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:26.902 14:20:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FOkBYtjrOn 00:38:26.902 14:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FOkBYtjrOn 00:38:27.162 14:20:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dQbBjl8Z9n 00:38:27.162 14:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dQbBjl8Z9n 00:38:27.162 14:20:13 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.162 14:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.423 nvme0n1 00:38:27.423 14:20:13 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:27.423 14:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:27.683 14:20:13 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:27.683 "subsystems": [ 00:38:27.683 { 00:38:27.683 "subsystem": 
"keyring", 00:38:27.683 "config": [ 00:38:27.683 { 00:38:27.683 "method": "keyring_file_add_key", 00:38:27.683 "params": { 00:38:27.683 "name": "key0", 00:38:27.683 "path": "/tmp/tmp.FOkBYtjrOn" 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "keyring_file_add_key", 00:38:27.683 "params": { 00:38:27.683 "name": "key1", 00:38:27.683 "path": "/tmp/tmp.dQbBjl8Z9n" 00:38:27.683 } 00:38:27.683 } 00:38:27.683 ] 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "subsystem": "iobuf", 00:38:27.683 "config": [ 00:38:27.683 { 00:38:27.683 "method": "iobuf_set_options", 00:38:27.683 "params": { 00:38:27.683 "small_pool_count": 8192, 00:38:27.683 "large_pool_count": 1024, 00:38:27.683 "small_bufsize": 8192, 00:38:27.683 "large_bufsize": 135168, 00:38:27.683 "enable_numa": false 00:38:27.683 } 00:38:27.683 } 00:38:27.683 ] 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "subsystem": "sock", 00:38:27.683 "config": [ 00:38:27.683 { 00:38:27.683 "method": "sock_set_default_impl", 00:38:27.683 "params": { 00:38:27.683 "impl_name": "posix" 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "sock_impl_set_options", 00:38:27.683 "params": { 00:38:27.683 "impl_name": "ssl", 00:38:27.683 "recv_buf_size": 4096, 00:38:27.683 "send_buf_size": 4096, 00:38:27.683 "enable_recv_pipe": true, 00:38:27.683 "enable_quickack": false, 00:38:27.683 "enable_placement_id": 0, 00:38:27.683 "enable_zerocopy_send_server": true, 00:38:27.683 "enable_zerocopy_send_client": false, 00:38:27.683 "zerocopy_threshold": 0, 00:38:27.683 "tls_version": 0, 00:38:27.683 "enable_ktls": false 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "sock_impl_set_options", 00:38:27.683 "params": { 00:38:27.683 "impl_name": "posix", 00:38:27.683 "recv_buf_size": 2097152, 00:38:27.683 "send_buf_size": 2097152, 00:38:27.683 "enable_recv_pipe": true, 00:38:27.683 "enable_quickack": false, 00:38:27.683 "enable_placement_id": 0, 00:38:27.683 "enable_zerocopy_send_server": true, 
00:38:27.683 "enable_zerocopy_send_client": false, 00:38:27.683 "zerocopy_threshold": 0, 00:38:27.683 "tls_version": 0, 00:38:27.683 "enable_ktls": false 00:38:27.683 } 00:38:27.683 } 00:38:27.683 ] 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "subsystem": "vmd", 00:38:27.683 "config": [] 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "subsystem": "accel", 00:38:27.683 "config": [ 00:38:27.683 { 00:38:27.683 "method": "accel_set_options", 00:38:27.683 "params": { 00:38:27.683 "small_cache_size": 128, 00:38:27.683 "large_cache_size": 16, 00:38:27.683 "task_count": 2048, 00:38:27.683 "sequence_count": 2048, 00:38:27.683 "buf_count": 2048 00:38:27.683 } 00:38:27.683 } 00:38:27.683 ] 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "subsystem": "bdev", 00:38:27.683 "config": [ 00:38:27.683 { 00:38:27.683 "method": "bdev_set_options", 00:38:27.683 "params": { 00:38:27.683 "bdev_io_pool_size": 65535, 00:38:27.683 "bdev_io_cache_size": 256, 00:38:27.683 "bdev_auto_examine": true, 00:38:27.683 "iobuf_small_cache_size": 128, 00:38:27.683 "iobuf_large_cache_size": 16 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "bdev_raid_set_options", 00:38:27.683 "params": { 00:38:27.683 "process_window_size_kb": 1024, 00:38:27.683 "process_max_bandwidth_mb_sec": 0 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "bdev_iscsi_set_options", 00:38:27.683 "params": { 00:38:27.683 "timeout_sec": 30 00:38:27.683 } 00:38:27.683 }, 00:38:27.683 { 00:38:27.683 "method": "bdev_nvme_set_options", 00:38:27.683 "params": { 00:38:27.683 "action_on_timeout": "none", 00:38:27.683 "timeout_us": 0, 00:38:27.683 "timeout_admin_us": 0, 00:38:27.683 "keep_alive_timeout_ms": 10000, 00:38:27.683 "arbitration_burst": 0, 00:38:27.683 "low_priority_weight": 0, 00:38:27.684 "medium_priority_weight": 0, 00:38:27.684 "high_priority_weight": 0, 00:38:27.684 "nvme_adminq_poll_period_us": 10000, 00:38:27.684 "nvme_ioq_poll_period_us": 0, 00:38:27.684 "io_queue_requests": 512, 
00:38:27.684 "delay_cmd_submit": true, 00:38:27.684 "transport_retry_count": 4, 00:38:27.684 "bdev_retry_count": 3, 00:38:27.684 "transport_ack_timeout": 0, 00:38:27.684 "ctrlr_loss_timeout_sec": 0, 00:38:27.684 "reconnect_delay_sec": 0, 00:38:27.684 "fast_io_fail_timeout_sec": 0, 00:38:27.684 "disable_auto_failback": false, 00:38:27.684 "generate_uuids": false, 00:38:27.684 "transport_tos": 0, 00:38:27.684 "nvme_error_stat": false, 00:38:27.684 "rdma_srq_size": 0, 00:38:27.684 "io_path_stat": false, 00:38:27.684 "allow_accel_sequence": false, 00:38:27.684 "rdma_max_cq_size": 0, 00:38:27.684 "rdma_cm_event_timeout_ms": 0, 00:38:27.684 "dhchap_digests": [ 00:38:27.684 "sha256", 00:38:27.684 "sha384", 00:38:27.684 "sha512" 00:38:27.684 ], 00:38:27.684 "dhchap_dhgroups": [ 00:38:27.684 "null", 00:38:27.684 "ffdhe2048", 00:38:27.684 "ffdhe3072", 00:38:27.684 "ffdhe4096", 00:38:27.684 "ffdhe6144", 00:38:27.684 "ffdhe8192" 00:38:27.684 ] 00:38:27.684 } 00:38:27.684 }, 00:38:27.684 { 00:38:27.684 "method": "bdev_nvme_attach_controller", 00:38:27.684 "params": { 00:38:27.684 "name": "nvme0", 00:38:27.684 "trtype": "TCP", 00:38:27.684 "adrfam": "IPv4", 00:38:27.684 "traddr": "127.0.0.1", 00:38:27.684 "trsvcid": "4420", 00:38:27.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.684 "prchk_reftag": false, 00:38:27.684 "prchk_guard": false, 00:38:27.684 "ctrlr_loss_timeout_sec": 0, 00:38:27.684 "reconnect_delay_sec": 0, 00:38:27.684 "fast_io_fail_timeout_sec": 0, 00:38:27.684 "psk": "key0", 00:38:27.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.684 "hdgst": false, 00:38:27.684 "ddgst": false, 00:38:27.684 "multipath": "multipath" 00:38:27.684 } 00:38:27.684 }, 00:38:27.684 { 00:38:27.684 "method": "bdev_nvme_set_hotplug", 00:38:27.684 "params": { 00:38:27.684 "period_us": 100000, 00:38:27.684 "enable": false 00:38:27.684 } 00:38:27.684 }, 00:38:27.684 { 00:38:27.684 "method": "bdev_wait_for_examine" 00:38:27.684 } 00:38:27.684 ] 00:38:27.684 }, 00:38:27.684 { 
00:38:27.684 "subsystem": "nbd", 00:38:27.684 "config": [] 00:38:27.684 } 00:38:27.684 ] 00:38:27.684 }' 00:38:27.684 14:20:13 keyring_file -- keyring/file.sh@115 -- # killprocess 2743904 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2743904 ']' 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2743904 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2743904 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2743904' 00:38:27.684 killing process with pid 2743904 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@971 -- # kill 2743904 00:38:27.684 Received shutdown signal, test time was about 1.000000 seconds 00:38:27.684 00:38:27.684 Latency(us) 00:38:27.684 [2024-11-06T13:20:13.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.684 [2024-11-06T13:20:13.964Z] =================================================================================================================== 00:38:27.684 [2024-11-06T13:20:13.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.684 14:20:13 keyring_file -- common/autotest_common.sh@976 -- # wait 2743904 00:38:27.944 14:20:14 keyring_file -- keyring/file.sh@118 -- # bperfpid=2745717 00:38:27.944 14:20:14 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2745717 /var/tmp/bperf.sock 00:38:27.944 14:20:14 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2745717 ']' 00:38:27.944 14:20:14 keyring_file -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:27.944 14:20:14 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:27.944 14:20:14 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:27.944 14:20:14 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:27.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:27.944 14:20:14 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:27.944 14:20:14 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:27.944 "subsystems": [ 00:38:27.944 { 00:38:27.944 "subsystem": "keyring", 00:38:27.944 "config": [ 00:38:27.944 { 00:38:27.944 "method": "keyring_file_add_key", 00:38:27.944 "params": { 00:38:27.944 "name": "key0", 00:38:27.944 "path": "/tmp/tmp.FOkBYtjrOn" 00:38:27.944 } 00:38:27.944 }, 00:38:27.944 { 00:38:27.944 "method": "keyring_file_add_key", 00:38:27.944 "params": { 00:38:27.944 "name": "key1", 00:38:27.944 "path": "/tmp/tmp.dQbBjl8Z9n" 00:38:27.944 } 00:38:27.944 } 00:38:27.944 ] 00:38:27.944 }, 00:38:27.944 { 00:38:27.944 "subsystem": "iobuf", 00:38:27.944 "config": [ 00:38:27.944 { 00:38:27.944 "method": "iobuf_set_options", 00:38:27.944 "params": { 00:38:27.944 "small_pool_count": 8192, 00:38:27.944 "large_pool_count": 1024, 00:38:27.944 "small_bufsize": 8192, 00:38:27.944 "large_bufsize": 135168, 00:38:27.944 "enable_numa": false 00:38:27.944 } 00:38:27.944 } 00:38:27.944 ] 00:38:27.944 }, 00:38:27.944 { 00:38:27.945 "subsystem": "sock", 00:38:27.945 "config": [ 00:38:27.945 { 00:38:27.945 "method": "sock_set_default_impl", 00:38:27.945 "params": { 00:38:27.945 "impl_name": "posix" 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "sock_impl_set_options", 00:38:27.945 
"params": { 00:38:27.945 "impl_name": "ssl", 00:38:27.945 "recv_buf_size": 4096, 00:38:27.945 "send_buf_size": 4096, 00:38:27.945 "enable_recv_pipe": true, 00:38:27.945 "enable_quickack": false, 00:38:27.945 "enable_placement_id": 0, 00:38:27.945 "enable_zerocopy_send_server": true, 00:38:27.945 "enable_zerocopy_send_client": false, 00:38:27.945 "zerocopy_threshold": 0, 00:38:27.945 "tls_version": 0, 00:38:27.945 "enable_ktls": false 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "sock_impl_set_options", 00:38:27.945 "params": { 00:38:27.945 "impl_name": "posix", 00:38:27.945 "recv_buf_size": 2097152, 00:38:27.945 "send_buf_size": 2097152, 00:38:27.945 "enable_recv_pipe": true, 00:38:27.945 "enable_quickack": false, 00:38:27.945 "enable_placement_id": 0, 00:38:27.945 "enable_zerocopy_send_server": true, 00:38:27.945 "enable_zerocopy_send_client": false, 00:38:27.945 "zerocopy_threshold": 0, 00:38:27.945 "tls_version": 0, 00:38:27.945 "enable_ktls": false 00:38:27.945 } 00:38:27.945 } 00:38:27.945 ] 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "subsystem": "vmd", 00:38:27.945 "config": [] 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "subsystem": "accel", 00:38:27.945 "config": [ 00:38:27.945 { 00:38:27.945 "method": "accel_set_options", 00:38:27.945 "params": { 00:38:27.945 "small_cache_size": 128, 00:38:27.945 "large_cache_size": 16, 00:38:27.945 "task_count": 2048, 00:38:27.945 "sequence_count": 2048, 00:38:27.945 "buf_count": 2048 00:38:27.945 } 00:38:27.945 } 00:38:27.945 ] 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "subsystem": "bdev", 00:38:27.945 "config": [ 00:38:27.945 { 00:38:27.945 "method": "bdev_set_options", 00:38:27.945 "params": { 00:38:27.945 "bdev_io_pool_size": 65535, 00:38:27.945 "bdev_io_cache_size": 256, 00:38:27.945 "bdev_auto_examine": true, 00:38:27.945 "iobuf_small_cache_size": 128, 00:38:27.945 "iobuf_large_cache_size": 16 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_raid_set_options", 
00:38:27.945 "params": { 00:38:27.945 "process_window_size_kb": 1024, 00:38:27.945 "process_max_bandwidth_mb_sec": 0 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_iscsi_set_options", 00:38:27.945 "params": { 00:38:27.945 "timeout_sec": 30 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_nvme_set_options", 00:38:27.945 "params": { 00:38:27.945 "action_on_timeout": "none", 00:38:27.945 "timeout_us": 0, 00:38:27.945 "timeout_admin_us": 0, 00:38:27.945 "keep_alive_timeout_ms": 10000, 00:38:27.945 "arbitration_burst": 0, 00:38:27.945 "low_priority_weight": 0, 00:38:27.945 "medium_priority_weight": 0, 00:38:27.945 "high_priority_weight": 0, 00:38:27.945 "nvme_adminq_poll_period_us": 10000, 00:38:27.945 "nvme_ioq_poll_period_us": 0, 00:38:27.945 "io_queue_requests": 512, 00:38:27.945 "delay_cmd_submit": true, 00:38:27.945 "transport_retry_count": 4, 00:38:27.945 "bdev_retry_count": 3, 00:38:27.945 "transport_ack_timeout": 0, 00:38:27.945 "ctrlr_loss_timeout_sec": 0, 00:38:27.945 "reconnect_delay_sec": 0, 00:38:27.945 "fast_io_fail_timeout_sec": 0, 00:38:27.945 "disable_auto_failback": false, 00:38:27.945 "generate_uuids": false, 00:38:27.945 "transport_tos": 0, 00:38:27.945 "nvme_error_stat": false, 00:38:27.945 "rdma_srq_size": 0, 00:38:27.945 "io_path_stat": false, 00:38:27.945 "allow_accel_sequence": false, 00:38:27.945 "rdma_max_cq_size": 0, 00:38:27.945 "rdma_cm_event_timeout_ms": 0, 00:38:27.945 "dhchap_digests": [ 00:38:27.945 "sha256", 00:38:27.945 "sha384", 00:38:27.945 "sha512" 00:38:27.945 ], 00:38:27.945 "dhchap_dhgroups": [ 00:38:27.945 "null", 00:38:27.945 "ffdhe2048", 00:38:27.945 "ffdhe3072", 00:38:27.945 "ffdhe4096", 00:38:27.945 "ffdhe6144", 00:38:27.945 "ffdhe8192" 00:38:27.945 ] 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_nvme_attach_controller", 00:38:27.945 "params": { 00:38:27.945 "name": "nvme0", 00:38:27.945 "trtype": "TCP", 00:38:27.945 "adrfam": "IPv4", 
00:38:27.945 "traddr": "127.0.0.1", 00:38:27.945 "trsvcid": "4420", 00:38:27.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.945 "prchk_reftag": false, 00:38:27.945 "prchk_guard": false, 00:38:27.945 "ctrlr_loss_timeout_sec": 0, 00:38:27.945 "reconnect_delay_sec": 0, 00:38:27.945 "fast_io_fail_timeout_sec": 0, 00:38:27.945 "psk": "key0", 00:38:27.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.945 "hdgst": false, 00:38:27.945 "ddgst": false, 00:38:27.945 "multipath": "multipath" 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_nvme_set_hotplug", 00:38:27.945 "params": { 00:38:27.945 "period_us": 100000, 00:38:27.945 "enable": false 00:38:27.945 } 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "method": "bdev_wait_for_examine" 00:38:27.945 } 00:38:27.945 ] 00:38:27.945 }, 00:38:27.945 { 00:38:27.945 "subsystem": "nbd", 00:38:27.945 "config": [] 00:38:27.945 } 00:38:27.945 ] 00:38:27.945 }' 00:38:27.945 14:20:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:27.945 [2024-11-06 14:20:14.106723] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:38:27.945 [2024-11-06 14:20:14.106786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745717 ] 00:38:27.945 [2024-11-06 14:20:14.190156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.945 [2024-11-06 14:20:14.219740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.206 [2024-11-06 14:20:14.364404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:28.776 14:20:14 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:28.777 14:20:14 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:28.777 14:20:14 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:28.777 14:20:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.777 14:20:14 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:29.037 14:20:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:29.037 14:20:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.037 14:20:15 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:29.037 14:20:15 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:29.037 14:20:15 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:29.037 14:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:29.038 14:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.038 14:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.298 14:20:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:29.298 14:20:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:29.298 14:20:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:29.298 14:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:29.557 14:20:15 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:29.557 14:20:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:29.557 14:20:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FOkBYtjrOn /tmp/tmp.dQbBjl8Z9n 00:38:29.557 14:20:15 keyring_file -- keyring/file.sh@20 -- # killprocess 2745717 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2745717 ']' 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2745717 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2745717 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 2745717' 00:38:29.557 killing process with pid 2745717 00:38:29.557 14:20:15 keyring_file -- common/autotest_common.sh@971 -- # kill 2745717 00:38:29.557 Received shutdown signal, test time was about 1.000000 seconds 00:38:29.557 00:38:29.557 Latency(us) 00:38:29.557 [2024-11-06T13:20:15.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.557 [2024-11-06T13:20:15.838Z] =================================================================================================================== 00:38:29.558 [2024-11-06T13:20:15.838Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@976 -- # wait 2745717 00:38:29.558 14:20:15 keyring_file -- keyring/file.sh@21 -- # killprocess 2743821 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2743821 ']' 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2743821 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:29.558 14:20:15 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2743821 00:38:29.818 14:20:15 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:29.818 14:20:15 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:29.818 14:20:15 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2743821' 00:38:29.818 killing process with pid 2743821 00:38:29.818 14:20:15 keyring_file -- common/autotest_common.sh@971 -- # kill 2743821 00:38:29.818 14:20:15 keyring_file -- common/autotest_common.sh@976 -- # wait 2743821 00:38:29.818 00:38:29.818 real 0m12.142s 00:38:29.818 user 0m29.287s 00:38:29.818 sys 0m2.735s 00:38:29.818 14:20:16 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:38:29.818 14:20:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.818 ************************************ 00:38:29.818 END TEST keyring_file 00:38:29.818 ************************************ 00:38:29.818 14:20:16 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:29.818 14:20:16 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:29.818 14:20:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:29.818 14:20:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:29.818 14:20:16 -- common/autotest_common.sh@10 -- # set +x 00:38:30.079 ************************************ 00:38:30.079 START TEST keyring_linux 00:38:30.079 ************************************ 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:30.079 Joined session keyring: 194300199 00:38:30.079 * Looking for test storage... 
00:38:30.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.079 --rc genhtml_branch_coverage=1 00:38:30.079 --rc genhtml_function_coverage=1 00:38:30.079 --rc genhtml_legend=1 00:38:30.079 --rc geninfo_all_blocks=1 00:38:30.079 --rc geninfo_unexecuted_blocks=1 00:38:30.079 00:38:30.079 ' 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.079 --rc genhtml_branch_coverage=1 00:38:30.079 --rc genhtml_function_coverage=1 00:38:30.079 --rc genhtml_legend=1 00:38:30.079 --rc geninfo_all_blocks=1 00:38:30.079 --rc geninfo_unexecuted_blocks=1 00:38:30.079 00:38:30.079 ' 
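The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh) splits each version string on `.`, `-` or `:` and compares the pieces numerically, component by component. A rough Python equivalent, under the assumption that missing components compare as zero (mirroring the shell loop bounds), would be:

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    # Split on ".", "-" or ":" as the shell IFS=.-: does, then compare
    # component-wise; shorter versions are padded with zeros.
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b
```

This is why the lcov version `1.15` compares as less than `2` in the trace: `[1, 15]` sorts before `[2, 0]` numerically, even though `"1.15" < "2"` would also hold as strings.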
00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.079 --rc genhtml_branch_coverage=1 00:38:30.079 --rc genhtml_function_coverage=1 00:38:30.079 --rc genhtml_legend=1 00:38:30.079 --rc geninfo_all_blocks=1 00:38:30.079 --rc geninfo_unexecuted_blocks=1 00:38:30.079 00:38:30.079 ' 00:38:30.079 14:20:16 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.079 --rc genhtml_branch_coverage=1 00:38:30.079 --rc genhtml_function_coverage=1 00:38:30.079 --rc genhtml_legend=1 00:38:30.079 --rc geninfo_all_blocks=1 00:38:30.079 --rc geninfo_unexecuted_blocks=1 00:38:30.079 00:38:30.079 ' 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:30.079 14:20:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
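The `nvme gen-hostnqn` call traced at nvmf/common.sh@17 produces a UUID-based host NQN (its captured value is assigned to `NVME_HOSTNQN` in the next record). A minimal sketch of what that command emits, assuming only the standard `nqn.2014-08.org.nvmexpress:uuid:` format:

```python
import uuid

def gen_hostnqn() -> str:
    # A freshly generated random UUID in the NVMe-oF host NQN format,
    # matching the shape of the value captured into NVME_HOSTNQN below.
    return "nqn.2014-08.org.nvmexpress:uuid:" + str(uuid.uuid4())

hostnqn = gen_hostnqn()
```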
00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.079 14:20:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.079 14:20:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.079 14:20:16 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.079 14:20:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.079 14:20:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:30.079 14:20:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:30.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.079 14:20:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.079 14:20:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:30.079 14:20:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:30.079 14:20:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:30.080 14:20:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:30.080 14:20:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:30.080 14:20:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:30.080 14:20:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:30.080 14:20:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:30.080 14:20:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:30.340 /tmp/:spdk-test:key0 00:38:30.340 14:20:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:30.340 14:20:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:30.340 14:20:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:30.340 /tmp/:spdk-test:key1 00:38:30.340 14:20:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2746155 00:38:30.340 14:20:16 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2746155 00:38:30.340 14:20:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2746155 ']' 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:30.340 14:20:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:30.340 [2024-11-06 14:20:16.494622] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
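The `prep_key`/`format_interchange_psk` sequence traced above (keyring/common.sh@20, which pipes into `python -` via nvmf/common.sh@733) turns the configured key string into the `NVMeTLSkey-1:00:...:` interchange form written to `/tmp/:spdk-test:key0`. A sketch of that transformation, under the assumption that the trailing four bytes are the plain little-endian CRC32 of the configured key bytes and that digest `0` maps to the `00` (no PSK hash) field:

```python
import base64
import struct
import zlib

def format_interchange_psk(configured_key: str, digest: int = 0) -> str:
    # Assumption: interchange blob = configured key bytes + CRC32 (LE),
    # base64-encoded between the version/digest prefix and a trailing ":".
    data = configured_key.encode()
    blob = data + struct.pack("<I", zlib.crc32(data))
    return "NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(blob).decode())

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

Note the base64 payload in the trace (`MDAxMTIy...`) decodes to the ASCII hex string itself plus four checksum bytes, which is why a 32-character key yields a 48-character base64 field.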
00:38:30.340 [2024-11-06 14:20:16.494678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746155 ] 00:38:30.340 [2024-11-06 14:20:16.578158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.340 [2024-11-06 14:20:16.608714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:31.281 14:20:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:31.281 [2024-11-06 14:20:17.278716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.281 null0 00:38:31.281 [2024-11-06 14:20:17.310782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:31.281 [2024-11-06 14:20:17.311142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.281 14:20:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:31.281 288459334 00:38:31.281 14:20:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:31.281 223016640 00:38:31.281 14:20:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2746488 00:38:31.281 14:20:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2746488 /var/tmp/bperf.sock 00:38:31.281 14:20:17 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2746488 ']' 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:31.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:31.281 14:20:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:31.281 [2024-11-06 14:20:17.389061] Starting SPDK v25.01-pre git sha1 159fecd99 / DPDK 24.03.0 initialization... 
00:38:31.281 [2024-11-06 14:20:17.389109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746488 ] 00:38:31.281 [2024-11-06 14:20:17.474028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.281 [2024-11-06 14:20:17.503902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.222 14:20:18 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:32.222 14:20:18 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:32.222 14:20:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:32.222 14:20:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:32.222 14:20:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:32.222 14:20:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:32.482 14:20:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:32.482 14:20:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:32.482 [2024-11-06 14:20:18.701466] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:32.742 nvme0n1 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:32.742 14:20:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:32.742 14:20:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:32.742 14:20:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.742 14:20:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:32.742 14:20:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@25 -- # sn=288459334 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 288459334 == \2\8\8\4\5\9\3\3\4 ]] 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 288459334 00:38:33.003 14:20:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:33.003 14:20:19 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.003 Running I/O for 1 seconds... 00:38:34.384 24465.00 IOPS, 95.57 MiB/s 00:38:34.384 Latency(us) 00:38:34.384 [2024-11-06T13:20:20.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:34.384 nvme0n1 : 1.01 24466.38 95.57 0.00 0.00 5216.21 4014.08 13271.04 00:38:34.384 [2024-11-06T13:20:20.664Z] =================================================================================================================== 00:38:34.384 [2024-11-06T13:20:20.664Z] Total : 24466.38 95.57 0.00 0.00 5216.21 4014.08 13271.04 00:38:34.384 { 00:38:34.384 "results": [ 00:38:34.384 { 00:38:34.384 "job": "nvme0n1", 00:38:34.384 "core_mask": "0x2", 00:38:34.384 "workload": "randread", 00:38:34.384 "status": "finished", 00:38:34.384 "queue_depth": 128, 00:38:34.384 "io_size": 4096, 00:38:34.384 "runtime": 1.005216, 00:38:34.384 "iops": 24466.383344475216, 00:38:34.384 "mibps": 95.57180993935631, 00:38:34.384 "io_failed": 0, 00:38:34.384 "io_timeout": 0, 00:38:34.384 "avg_latency_us": 5216.207313436881, 00:38:34.384 "min_latency_us": 4014.08, 00:38:34.384 "max_latency_us": 13271.04 00:38:34.384 } 00:38:34.384 ], 00:38:34.384 "core_count": 1 00:38:34.384 } 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:34.384 14:20:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:34.384 14:20:20 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:34.384 14:20:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:34.384 14:20:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:34.384 14:20:20 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:34.384 14:20:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:34.645 [2024-11-06 14:20:20.799911] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:34.645 [2024-11-06 14:20:20.800164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c748d0 (107): Transport endpoint is not connected 00:38:34.645 [2024-11-06 14:20:20.801160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c748d0 (9): Bad file descriptor 00:38:34.645 [2024-11-06 14:20:20.802162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:34.645 [2024-11-06 14:20:20.802169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:34.645 [2024-11-06 14:20:20.802175] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:34.645 [2024-11-06 14:20:20.802182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:34.645 request: 00:38:34.645 { 00:38:34.645 "name": "nvme0", 00:38:34.645 "trtype": "tcp", 00:38:34.645 "traddr": "127.0.0.1", 00:38:34.645 "adrfam": "ipv4", 00:38:34.645 "trsvcid": "4420", 00:38:34.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:34.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:34.645 "prchk_reftag": false, 00:38:34.645 "prchk_guard": false, 00:38:34.645 "hdgst": false, 00:38:34.645 "ddgst": false, 00:38:34.645 "psk": ":spdk-test:key1", 00:38:34.645 "allow_unrecognized_csi": false, 00:38:34.645 "method": "bdev_nvme_attach_controller", 00:38:34.645 "req_id": 1 00:38:34.645 } 00:38:34.645 Got JSON-RPC error response 00:38:34.645 response: 00:38:34.645 { 00:38:34.645 "code": -5, 00:38:34.645 "message": "Input/output error" 00:38:34.645 } 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@33 -- # sn=288459334 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 288459334 00:38:34.645 1 links removed 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:34.645 
14:20:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@33 -- # sn=223016640 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 223016640 00:38:34.645 1 links removed 00:38:34.645 14:20:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2746488 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2746488 ']' 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2746488 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2746488 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2746488' 00:38:34.645 killing process with pid 2746488 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@971 -- # kill 2746488 00:38:34.645 Received shutdown signal, test time was about 1.000000 seconds 00:38:34.645 00:38:34.645 Latency(us) 00:38:34.645 [2024-11-06T13:20:20.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.645 [2024-11-06T13:20:20.925Z] =================================================================================================================== 00:38:34.645 [2024-11-06T13:20:20.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:34.645 14:20:20 keyring_linux -- common/autotest_common.sh@976 -- # wait 2746488 
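The `check_keys`/`get_keysn` pattern traced above cross-checks two views of the same key: the `.sn` field from the bperf `keyring_get_keys` RPC (selected by name with `jq`) must equal the serial that `keyctl search @s user <name>` reports for the session keyring entry. A sketch of that check, using a sample payload shaped like the RPC response (field names `.name`/`.sn`/`.refcnt` are taken from the `jq` filters in the log; the full schema is an assumption):

```python
import json

# Sample keyring_get_keys response, shaped after the jq filters in the trace.
payload = json.loads("""
[
  {"name": ":spdk-test:key0", "sn": 288459334, "refcnt": 1},
  {"name": ":spdk-test:key1", "sn": 223016640, "refcnt": 1}
]
""")

def get_key(keys, name):
    # Equivalent of: jq '.[] | select(.name == "<name>")'
    return next(k for k in keys if k["name"] == name)

# check_keys-style verification: the RPC-reported serial must match the
# serial keyctl assigned when the key was added (288459334 in this run).
keyctl_sn = 288459334  # from: keyctl search @s user :spdk-test:key0
assert get_key(payload, ":spdk-test:key0")["sn"] == keyctl_sn
```

The cleanup path then uses that same serial the other way around: `keyctl unlink <sn>` removes the session keyring entry, which is the "1 links removed" output above.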
00:38:34.906 14:20:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2746155
00:38:34.906 14:20:20 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2746155 ']'
00:38:34.906 14:20:20 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2746155
00:38:34.906 14:20:20 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2746155
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2746155'
00:38:34.906 killing process with pid 2746155
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@971 -- # kill 2746155
00:38:34.906 14:20:21 keyring_linux -- common/autotest_common.sh@976 -- # wait 2746155
00:38:35.166
00:38:35.166 real 0m5.145s
00:38:35.166 user 0m9.593s
00:38:35.166 sys 0m1.393s
00:38:35.166 14:20:21 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:35.166 14:20:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:35.166 ************************************
00:38:35.166 END TEST keyring_linux
00:38:35.166 ************************************
00:38:35.166 14:20:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:35.166 14:20:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:38:35.166 14:20:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:35.166 14:20:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:35.166 14:20:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:38:35.166 14:20:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:38:35.166 14:20:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:38:35.166 14:20:21 -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:35.166 14:20:21 -- common/autotest_common.sh@10 -- # set +x
00:38:35.166 14:20:21 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:38:35.166 14:20:21 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:38:35.166 14:20:21 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:38:35.166 14:20:21 -- common/autotest_common.sh@10 -- # set +x
00:38:43.302 INFO: APP EXITING
00:38:43.302 INFO: killing all VMs
00:38:43.302 INFO: killing vhost app
00:38:43.302 INFO: EXIT DONE
00:38:46.610 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:46.610 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:46.610 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:50.815 Cleaning
00:38:50.815 Removing: /var/run/dpdk/spdk0/config
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:50.815 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:50.815 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:50.815 Removing: /var/run/dpdk/spdk1/config
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:50.815 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:50.815 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:50.815 Removing: /var/run/dpdk/spdk2/config
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:50.815 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:50.815 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:50.815 Removing: /var/run/dpdk/spdk3/config
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:50.815 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:50.815 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:50.815 Removing: /var/run/dpdk/spdk4/config
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:50.815 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:50.815 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:50.815 Removing: /dev/shm/bdev_svc_trace.1
00:38:50.815 Removing: /dev/shm/nvmf_trace.0
00:38:50.815 Removing: /dev/shm/spdk_tgt_trace.pid2166840
00:38:50.815 Removing: /var/run/dpdk/spdk0
00:38:50.815 Removing: /var/run/dpdk/spdk1
00:38:50.815 Removing: /var/run/dpdk/spdk2
00:38:50.815 Removing: /var/run/dpdk/spdk3
00:38:50.815 Removing: /var/run/dpdk/spdk4
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2165348
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2166840
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2167695
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2168730
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2169028
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2170142
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2170217
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2170620
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2171754
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2172366
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2172712
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2173035
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2173433
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2173824
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2174186
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2174511
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2174767
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2175992
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2179263
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2179628
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2179996
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2180168
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2180601
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2180718
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2181137
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2181425
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2181679
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2181806
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2182131
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2182180
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2182742
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2182983
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2183378
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2188107
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2193361
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2205824
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2206733
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2212159
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2212510
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2217725
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2224840
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2228147
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2240772
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2251785
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2253910
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2254932
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2276586
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2281489
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2338534
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2345155
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2352142
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2360067
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2360090
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2361086
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2362110
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2363147
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2363875
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2364019
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2364215
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2364482
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2364533
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2365578
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2367008
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2368018
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2368688
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2368693
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2369022
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2370231
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2371543
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2381436
00:38:50.815 Removing: /var/run/dpdk/spdk_pid2415761
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2421351
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2423199
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2425526
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2425873
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2426034
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2426236
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2426992
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2429264
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2430379
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2431084
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2433580
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2434415
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2435229
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2440321
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2446996
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2446998
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2447000
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2452203
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2462656
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2467471
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2474732
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2476231
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2478005
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2479584
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2485073
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2490495
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2495566
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2504835
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2504841
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2510629
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2510780
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2511067
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2511644
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2511732
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2517150
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2517975
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2523323
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2526531
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2533241
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2539853
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2550080
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2559306
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2559323
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2582400
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2583196
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2584003
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2584765
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2585823
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2586505
00:38:50.816 Removing: /var/run/dpdk/spdk_pid2587192
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2587876
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2593149
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2593386
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2600688
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2600808
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2607490
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2613120
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2624516
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2625190
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2630268
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2630622
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2635698
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2642628
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2645561
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2657743
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2669079
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2671078
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2672087
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2691763
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2696515
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2699720
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2707228
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2707295
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2713556
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2716213
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2718433
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2719919
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2722229
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2723647
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2733667
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2734325
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2734991
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2737953
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2738487
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2738977
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2743821
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2743904
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2745717
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2746155
00:38:51.077 Removing: /var/run/dpdk/spdk_pid2746488
00:38:51.077 Clean
00:38:51.339 14:20:37 -- common/autotest_common.sh@1451 -- # return 0
00:38:51.339 14:20:37 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:38:51.339 14:20:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:51.339 14:20:37 -- common/autotest_common.sh@10 -- # set +x
00:38:51.339 14:20:37 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:38:51.339 14:20:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:51.339 14:20:37 -- common/autotest_common.sh@10 -- # set +x
00:38:51.339 14:20:37 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:51.339 14:20:37 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:51.339 14:20:37 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:51.339 14:20:37 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:38:51.339 14:20:37 -- spdk/autotest.sh@394 -- # hostname
00:38:51.339 14:20:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:51.599 geninfo: WARNING: invalid characters removed from testname!
00:39:18.185 14:21:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:20.094 14:21:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:21.981 14:21:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:23.361 14:21:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:25.276 14:21:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:27.883 14:21:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:29.445 14:21:15 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:29.445 14:21:15 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:29.445 14:21:15 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:29.445 14:21:15 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:29.445 14:21:15 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:29.445 14:21:15 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:29.445 + [[ -n 2079548 ]]
00:39:29.445 + sudo kill 2079548
00:39:29.456 [Pipeline] }
00:39:29.472 [Pipeline] // stage
00:39:29.478 [Pipeline] }
00:39:29.492 [Pipeline] // timeout
00:39:29.497 [Pipeline] }
00:39:29.511 [Pipeline] // catchError
00:39:29.516 [Pipeline] }
00:39:29.530 [Pipeline] // wrap
00:39:29.536 [Pipeline] }
00:39:29.549 [Pipeline] // catchError
00:39:29.559 [Pipeline] stage
00:39:29.561 [Pipeline] { (Epilogue)
00:39:29.574 [Pipeline] catchError
00:39:29.576 [Pipeline] {
00:39:29.588 [Pipeline] echo
00:39:29.590 Cleanup processes
00:39:29.596 [Pipeline] sh
00:39:29.887 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:29.887 2760095 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:29.943 [Pipeline] sh
00:39:30.233 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:30.233 ++ grep -v 'sudo pgrep'
00:39:30.233 ++ awk '{print $1}'
00:39:30.233 + sudo kill -9
00:39:30.233 + true
00:39:30.248 [Pipeline] sh
00:39:30.539 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:42.788 [Pipeline] sh
00:39:43.080 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:43.080 Artifacts sizes are good
00:39:43.095 [Pipeline] archiveArtifacts
00:39:43.103 Archiving artifacts
00:39:43.240 [Pipeline] sh
00:39:43.528 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:43.541 [Pipeline] cleanWs
00:39:43.549 [WS-CLEANUP] Deleting project workspace...
00:39:43.549 [WS-CLEANUP] Deferred wipeout is used...
00:39:43.555 [WS-CLEANUP] done
00:39:43.557 [Pipeline] }
00:39:43.573 [Pipeline] // catchError
00:39:43.584 [Pipeline] sh
00:39:43.870 + logger -p user.info -t JENKINS-CI
00:39:43.879 [Pipeline] }
00:39:43.893 [Pipeline] // stage
00:39:43.898 [Pipeline] }
00:39:43.912 [Pipeline] // node
00:39:43.917 [Pipeline] End of Pipeline
00:39:43.950 Finished: SUCCESS